Rather than using traditional ETL (Extract, Transform, Load) processes that physically move and reformat data into new tables, KGNN creates a virtual semantic layer that interprets data where it lives.
1. How the "Triples" Graph Works (The 3rd Dimension)
Traditional databases are 2D (rows and columns). KGNN utilizes the Triple format—the atomic building block of a Knowledge Graph—to add a "3rd dimension" of context.
A triple consists of:
Subject: The entity (e.g., "Invoice #1234")
Predicate: The relationship (e.g., "is_billed_to")
Object: The target entity (e.g., "Global Corp")
By breaking data down into these Subject-Predicate-Object relationships, KGNN transforms flat CSV rows or SQL tables into a multi-dimensional web. This "3rd dimension" is the connection itself, allowing the system to understand that a "Customer ID" in SAP is the same "Client" in an Oracle DB without needing to change the underlying data names.
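The Subject-Predicate-Object structure can be sketched in a few lines. This is an illustrative toy, not Equitus code; the identifiers (`Invoice #1234`, `same_as`, the SAP/Oracle labels) are assumptions carried over from the examples above.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """One atomic fact: Subject -[Predicate]-> Object."""
    subject: str
    predicate: str
    obj: str

# A flat row from a CSV or SQL table...
row = {"invoice_id": "1234", "customer": "Global Corp", "amount": 500.0}

# ...becomes a small web of facts. Note the last triple: the cross-system
# link is itself data -- the "3rd dimension" of context.
triples = [
    Triple(f"Invoice #{row['invoice_id']}", "is_billed_to", row["customer"]),
    Triple(f"Invoice #{row['invoice_id']}", "has_amount", str(row["amount"])),
    Triple("SAP:CustomerID:42", "same_as", f"Oracle:Client:{row['customer']}"),
]

for t in triples:
    print(f"({t.subject}) -[{t.predicate}]-> ({t.obj})")
```

The point of the third triple is that equivalence across systems is expressed as another edge in the graph, so neither source system has to rename or restructure anything.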
2. Acting as a Data Conversion Layer
Equitus KGNN doesn't just "read" the data; it contextualizes it.
How it handles specific sources:
CSV Files: KGNN ingests raw text or flat files and uses Natural Language Processing (NLP) and machine learning to "extract facts." It identifies entities within the columns and automatically proposes the relationships between them.
Oracle & SAP: These systems often have rigid, proprietary schemas. KGNN uses automated semantic mapping to "point" to these databases.
It maps the relational tables (SQL) to the graph schema, allowing you to query SAP data as if it were part of the same network as your Oracle data.
IBM DB2: KGNN is natively optimized for IBM Power10/11 hardware. It uses IBM's Matrix Multiply Assist (MMA) to perform the complex math required for graph neural networks directly on the server where the DB2 data resides, ensuring extremely low latency and "zero movement" of sensitive data.
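The "point at the database, don't move the data" idea can be sketched with a virtual mapping layer. This is a simplified illustration under assumed names (`MAPPING`, `virtual_triples`, an in-memory SQLite table standing in for Oracle/SAP/DB2); a real deployment would infer the column-to-predicate mapping automatically rather than hard-coding it.

```python
import sqlite3

# Hypothetical column-to-predicate mapping. In KGNN this would come from
# automated semantic mapping; here it is declared by hand for illustration.
MAPPING = {"customer": "is_billed_to", "amount": "has_amount"}

# Stand-in for an existing relational source (Oracle, SAP, DB2, ...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id TEXT, customer TEXT, amount REAL)")
conn.execute("INSERT INTO invoices VALUES ('1234', 'Global Corp', 500.0)")

def virtual_triples(conn, table, id_col, mapping):
    """Yield Subject-Predicate-Object views over a SQL table at read time.
    The underlying rows are never copied or reshaped -- no ETL."""
    cols = ", ".join([id_col] + list(mapping))
    for row in conn.execute(f"SELECT {cols} FROM {table}"):
        subject = f"{table}:{row[0]}"
        for pred, val in zip(mapping.values(), row[1:]):
            yield (subject, pred, str(val))

for t in virtual_triples(conn, "invoices", "id", MAPPING):
    print(t)
```

Because the triples are generated on read, the same pattern can sit over several source systems at once and present them as one graph.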
3. The Process: From Raw Data to Graph
Ingestion (Schema-less): You "point" KGNN at your CSVs or databases. Unlike traditional systems, you don't have to pre-define what the final table looks like.
Semantic Extraction: The KGNN "Neural" component analyzes the data to find patterns. It identifies that "Entity A" in your CSV and "Entity B" in SAP are likely the same thing.
Automated Triplification: It generates the triples automatically.
Vectorization: Finally, it converts these graph relationships into vectors (mathematical coordinates). This makes the data "AI-ready" for Large Language Models (LLMs) or predictive analytics.
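Two of the steps above, semantic extraction and vectorization, can be sketched in miniature. The real "Neural" component is learned; here string similarity stands in for entity matching and a deterministic hash stands in for a learned embedding, purely to show the shape of the output. All function names are illustrative.

```python
import hashlib
from difflib import SequenceMatcher

def likely_same(a: str, b: str, threshold: float = 0.8) -> bool:
    """Semantic extraction, sketched: flag two entity labels from different
    systems as probable duplicates. A real KGNN learns this match."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def to_vector(triple: tuple, dim: int = 8) -> list:
    """Vectorization, sketched: map a triple to fixed-size coordinates.
    A real system learns these embeddings; a hash just shows the shape."""
    digest = hashlib.sha256("|".join(triple).encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

# "Entity A" in a CSV vs. "Entity B" in SAP:
print(likely_same("Global Corp", "GLOBAL CORP."))

vec = to_vector(("Invoice #1234", "is_billed_to", "Global Corp"))
print(len(vec))  # fixed-size coordinates, ready for LLMs or analytics
```

The fixed-size vector is what makes the graph "AI-ready": downstream models consume coordinates, not table rows.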
Comparison: Traditional vs. KGNN
| Feature | Traditional Databases (SQL/CSV) | Equitus KGNN (Graph) |
| --- | --- | --- |
| Structure | Rigid rows/columns | Flexible triples |
| Integration | Manual ETL (slow/fragile) | Automated semantic mapping |
| Context | Lost in silos | Preserved across systems |
| Hardware | General purpose / GPU | Optimized for IBM Power (no GPU needed) |