goInception Customization in Practice: Finally Achieved Automated SQL Review for OB-MySQL Tenants

goInception Customization in Practice: Finally Achieved Automated SQL Review for OB-MySQL Tenants

Automated SQL Review for OB-MySQL Tenants with goInception Customization

image

---

Business Pain Point: The Review Dilemma

OceanBase (OB) has become a core part of many large-scale database deployments.

Work order platforms lower the barrier for business access and speed up OB adoption.

However, industry-standard SQL review tools like goInception fail to handle OB-MySQL’s distributed execution plan format. Specifically:

  • `est_rows` missing – row count estimation cannot be extracted.
  • High-risk SQL undetected – large table DML and full table scans slip through.
  • Critical operational risks for DBAs.

This article provides an end-to-end solution:

Problem identification → Technical modification → Deployment verification

You’ll see deep goInception customization to parse OB execution plans, plus a Python integration layer.

---

Key Difference: OB-MySQL vs Native MySQL Execution Plans

OB-MySQL’s plans differ fundamentally:

  • Native MySQL: `rows` field stores estimated row count (easy to parse).
  • OB-MySQL: Plan appears as a text table in `Query Plan` — no direct row count field.

Impact on goInception:

  • Rules using `est_rows` fail (No large-table limit checks, no full-scan detection).
  • Work orders skip or block incorrectly.
  • Must maintain compatibility with MySQL/TiDB.

---

Solution Overview: goInception V1.3.0 Customization

Core principles:

  • Extend structs – add `ObPlan` for OB-specific plan text.
  • Multi-source parsing – extract row count from OB `Query Plan` text.
  • Preserve existing logic – keep MySQL/TiDB behavior intact.

This bridges OB’s distributed plan formats with automated SQL review—catching high-risk ops while preserving stability.

---

Step 1: Extend the `ExplainInfo` Struct

_File:_ `session/common.go`

Add `ObPlan` to store OB `Query Plan`:

type ExplainInfo struct {
    SelectType    string   `gorm:"Column:select_type"`
    Table         string   `gorm:"Column:table"`
    Partitions    string   `gorm:"Column:partitions"`
    Type          string   `gorm:"Column:type"`
    PossibleKeys  string   `gorm:"Column:possible_keys"`
    Key           string   `gorm:"Column:key"`
    KeyLen        string   `gorm:"Column:key_len"`
    Ref           string   `gorm:"Column:ref"`
    Rows          int64    `gorm:"Column:rows"`
    Filtered      float32  `gorm:"Column:filtered"`
    Extra         string   `gorm:"Column:Extra"`
    Count         string   `gorm:"Column:count"`
    EstRows       string   `gorm:"Column:estRows"`
    ObPlan        sql.NullString `gorm:"Column:Query Plan"`
}

> Tip: `sql.NullString` avoids null pointer errors, keeping existing MySQL/TiDB reviews unaffected.

---

Step 2: Optimize `getExplainInfo` Function

_File:_ `session/session_inception.go`

Introduce branch logic for:

  • MySQL/TiDB logic
  • OB-specific parsing

Also add error handling for `rawScan`:

var rows []ExplainInfo
if err := s.rawScan(sql, &rows); err != nil {
    if myErr, ok := err.(*mysqlDriver.MySQLError); ok {
        s.appendErrorMessage(myErr.Message)
        if newRecord != nil {
            newRecord.appendErrorMessage(myErr.Message)
        }
    } else {
        s.appendErrorMessage(err.Error())
        if newRecord != nil {
            newRecord.appendErrorMessage(err.Error())
        }
    }
}

Important checks:

  • Distinguish MySQL driver errors from generic ones.
  • Avoid `nil` panics when using `newRecord`.

---

Step 3: Include OB Row Parsing in AffectedRows Calculation

Modification point – handle OB plan parsing with `ObRowAffect()`:

if row.ObPlan.Valid {
    row.Rows = ObRowAffect(row.ObPlan)
}

Apply this in both `max` rule branch and single-row branch.

---

Step 4: Implement `ObRowAffect()` Function

_File:_ `session/session_inception.go`

func ObRowAffect(plan sql.NullString) int64 {
    if !plan.Valid {
        return 0
    }
    r := strings.NewReader(plan.String)
    br := bufio.NewReader(r)
    estrows := make([]string, 0)
    for {
        l, e := br.ReadString('\n')
        if e != nil && len(l) == 0 {
            break
        }
        if strings.HasPrefix(l, "|") {
            r := strings.Split(l, "|")
            estrows = append(estrows, strings.TrimSpace(r[4]))
        }
    }

    var estrowMax int
    for i := 1; i < len(estrows); i++ {
        estrow, err := strconv.Atoi(estrows[i])
        if err != nil {
            continue
        }
        estrowMax = max(estrow, estrowMax)
    }
    return int64(estrowMax)
}

func max(a, b int) int {
    if a > b {
        return a
    }
    return b
}

---

Step 5: Python Integration Layer

In your Python work order system, wrap goInception usage for ease:

class GoInception:
    def __init__(self) -> None:
        self.go_inception_host = "localhost"
        self.go_inception_user = "root"
        self.go_inception_password = ""
        self.go_inception_port = 4000
        self.go_inception_db_name = ""
        self.commit = False

    def check_sql(self, host, user, password, port, database, sqls):
        sql = f"""/*--host='{host}';--port={port};--user={user};--password='{password}';--check=1;max_insert_rows=10;*/
                    inception_magic_start;
                    use `{database}`;
                    {sqls};
                    inception_magic_commit;
                """
        with OpenMysqlDb(
            host=self.go_inception_host,
            user=self.go_inception_user,
            port=self.go_inception_port,
            password=self.go_inception_password,
            db_name=self.go_inception_db_name,
            commit=self.commit,
        ) as conn:
            conn.ping()
            return conn.db_query(sql=sql)

Add execution method:

def execute_sql(self, host, user, password, port, database, sqls,
                backup=0, ignore_warnings=0, fingerprint=0):
    sql = f"""/*--host='{host}';--port={port};--user='{user}';--password='{password}';--execute=1;
               backup={backup};ignore_warnings={ignore_warnings};fingerprint={fingerprint};*/
            inception_magic_start;
            use `{database}`;
            {sqls};
            inception_magic_commit;
            """
    with OpenMysqlDb(
        host=self.go_inception_host,
        user=self.go_inception_user,
        port=self.go_inception_port,
        password=self.go_inception_password,
        db_name=self.go_inception_db_name,
        commit=self.commit,
    ) as conn:
        conn.ping()
        return conn.db_query(sql=sql)

---

Verification

Workflow:

  • Work order submitted
  • Identify tenant type (OB/MySQL)
  • Call Python interface → pass params
  • goInception parses plan → extracts `est_rows`
  • Threshold check → audit result
  • Work order system accepts or rejects

Test cases:

| Scenario | `est_rows` | Threshold | Result |

|--------------------------------------------|------------|-----------|-------------------------|

| OB tenant UPDATE 6000 rows | 6000 | 5000 | Rejected |

| MySQL tenant UPDATE 2000 rows | 2000 | 5000 | Approved |

---

Metrics

  • Accuracy: 100% OB `est_rows` extraction
  • Performance: ≤20ms review time, ≥100 QPS throughput

---

Summary

This project delivers:

  • Deep modification at Golang layer
  • Python wrapper for smooth integration
  • Zero disruption for existing MySQL/TiDB logic

The OB ticket system now automatically performs precise risk interception for OB-MySQL SQL, enhancing safety in distributed environments.

---

You can now compile the modified goInception, integrate the Python wrapper, and deploy the full solution in your production work order system.

Read more

Translate the following blog post title into English, concise and natural. Return plain text only without quotes. 哈佛大学 R 编程课程介绍

Harvard CS50: Introduction to Programming with R Harvard University offers exceptional beginner-friendly computer science courses. We’re excited to announce the release of Harvard CS50’s Introduction to Programming in R, a powerful language widely used for statistical computing, data science, and graphics. This course was developed by Carter Zenke.