From: aidotengineer
Super Protocol is a confidential AI cloud and marketplace designed for secure collaboration and monetization of AI models, data, and compute [00:08:26]. It aims to make confidential AI a practical reality for developers [00:08:11].
The Problem Super Protocol Solves
AI’s transformation across industries like healthcare, finance, automation, and digital marketing is hindered by a lack of trust [00:00:09]. Traditional cloud setups fall short because they are built on trust and legal contracts rather than provable guarantees [00:08:01]. This leads to several challenges:
- Sensitive Data Use: Running models on sensitive data without handing that data over is difficult [00:00:18]. Data is most vulnerable during processing, whether training, fine-tuning, or inference [00:01:26].
- Proprietary Model Deployment: Deploying proprietary models without losing control of the intellectual property is a major concern [00:00:24].
- Collaboration in Non-Deterministic Environments: Collaboration, especially in non-deterministic environments, often relies on blind trust [00:00:29].
- Provenance: Proving that a model was truly trained where and how it was claimed, and on specific datasets, is challenging [00:07:17].
Foundation: Confidential AI and Trusted Execution Environments (TEEs)
Super Protocol makes confidential AI real [00:00:36]. The core technology behind confidential AI is confidential computing [00:01:10].
Trusted Execution Environments (TEEs)
TEEs solve the problem of data and model vulnerability during processing [00:01:21].
- Definition: A TEE is a secure, isolated part of the processor, such as Intel TDX, AMD SEV-SNP, or Nvidia GPU TEEs [00:01:43]. It creates a “confidential environment” where code and data are protected during execution [00:01:53].
- Protection: The chip itself provides isolation using built-in instructions [00:02:03]. Once a workload enters this environment, it is protected in memory and invisible to the host OS, the hypervisor, anyone with system access, and even the hardware owner [00:02:08].
- Attestation: A TEE generates a cryptographic attestation, a signed proof that the workload ran inside verified hardware using unmodified code [00:02:24]. The attestation confirms that the workload is genuinely protected by the hardware, what the workload actually is, and that it is running on a real, properly manufactured TEE-capable chip [00:02:40]. This makes it possible to run sensitive computations securely and prove they ran as intended [00:03:14].
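To make the attestation flow concrete, here is a minimal Python sketch of how a client might check a signed quote before releasing any data to the environment. The quote format, field names, and issuer-key handling are assumptions for illustration; real Intel TDX, AMD SEV-SNP, and Nvidia quotes have vendor-specific formats and verification chains.

```python
# Minimal sketch of client-side attestation checking (illustrative only).
# Field names, the quote format, and the issuer key are hypothetical;
# real TEE quotes have vendor-specific formats and trust chains.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# Placeholder measurement of the unmodified code we expect to run.
EXPECTED_WORKLOAD_HASH = "sha256:<expected-measurement>"

def verify_attestation(quote_json: bytes, signature: bytes,
                       issuer_key: Ed25519PublicKey) -> bool:
    """Return True only if the quote is signed by the hardware issuer
    and reports the exact workload measurement we expect."""
    try:
        issuer_key.verify(signature, quote_json)  # proves a genuine TEE produced it
    except InvalidSignature:
        return False
    quote = json.loads(quote_json)
    return (quote.get("tee_type") in {"intel_tdx", "amd_sev_snp", "nvidia_cc"}
            and quote.get("workload_hash") == EXPECTED_WORKLOAD_HASH)

# Only release sensitive inputs if the attestation verifies, e.g.:
# if verify_attestation(quote, sig, issuer_key):
#     send_encrypted_dataset(...)   # hypothetical helper
```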
Key Principles and Features of Super Protocol
Super Protocol is built on core principles: GPUless, trustless, and limitless [00:00:43].
GPUless
“GPUless” does not mean no GPUs; it means removing dependency on any particular GPU owner or vendor [00:10:31]. Users can run accelerated AI workloads across independent GPU nodes without being locked into any cloud vendor or centralized provider [00:10:37], and without buying or renting GPUs for longer than they are needed [00:10:48].
Trustless
Super Protocol replaces blind trust with built-in cryptographic proofs [00:31:43]. Every run is verifiable independently and transparently down to the hardware level [00:31:49].
- Every workload produces a cryptographic proof showing what ran, where, and how, without exposing the actual workload data [00:32:00].
- This attestation verifies that a model executed in a real TEE using unmodified code on verified hardware inside a secure open-source runtime [00:32:35].
- If any attempt is made to bypass these checks, the protocol prevents sensitive data from being loaded and the workload from running [00:32:58].
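As a toy illustration of that enforcement idea, the sketch below refuses to hand the data decryption key to a workload unless every component matches an approved measurement. The component names, measurements, and key-release function are illustrative assumptions, not Super Protocol’s actual mechanism.

```python
# Toy sketch of the enforcement idea: sensitive inputs are unlocked only if
# every integrity check passes; otherwise the job never sees the data.
# Names and the check list are illustrative, not the real protocol.
import hashlib

def measurements_match(components: dict[str, bytes],
                       approved: dict[str, str]) -> bool:
    """True only if every supplied component hashes to its approved measurement."""
    return all(
        hashlib.sha256(blob).hexdigest() == approved.get(name)
        for name, blob in components.items()
    )

def release_data_key(components: dict[str, bytes],
                     approved: dict[str, str],
                     data_key: bytes) -> bytes:
    """Hand the data decryption key to the workload only if all checks pass."""
    if not measurements_match(components, approved):
        raise PermissionError("integrity check failed: data stays encrypted")
    return data_key
```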
Limitless
Super Protocol removes legal, technical, and organizational barriers [00:11:23]. Users are not limited by policy, regulation, or infrastructure constraints [00:11:29]. It enables training, deployment, and monetization of AI across organizations and jurisdictions with full confidentiality and ownership [00:12:18]. It supports agentic, non-deterministic AI where autonomous agents interact and evolve without predefined scripts or centralized control [00:11:58].
Components and Architecture
Super Protocol’s architecture is built on several key components and principles:
- TEE-Agnostic Infrastructure: Runs on Intel, Nvidia, and AMD TEEs, with plans to support future platforms as more major chip makers ship TEEs [00:08:41].
- Edge-Ready Architecture: ARM confidential-computing compatibility has been validated, with the aim of delivering end-to-end confidential AI from personal edge devices to the cloud [00:09:01].
- Swarm Computing Principles: Workloads scale across distributed GPU nodes with no single point of failure; if a server goes down, work is automatically redistributed [00:09:27].
- Decentralized Orchestration: Fully decentralized, with no human intervention; everything is orchestrated by smart contracts on BNB Chain [00:09:40].
- Zero-Barrier Entry: Users do not need TEE expertise to run or attest workloads [00:09:52].
- Open-Source Protocol: All parts of Super Protocol will be open source; it is a protocol, not a service [00:10:03].
- Confidential Virtual Machine (CVM): A CVM is launched once and handles multiple jobs, contacting an open-source certification authority for remote attestation on boot [00:34:49].
- Trusted Loader: An open-source security mechanism inside the CVM that is itself attested and creates a signed key pair. It checks every component before data enters [00:35:18] and blocks the job if any check fails, safeguarding data and models [00:35:33].
- SPCTL CLI Tool: A command-line tool for tasks such as uploading archives to decentralized storage [00:28:31], archiving and uploading datasets (files are encrypted during upload) [00:36:41], and decrypting results [00:38:55]; see the sketch after this list.
- SuperAI Marketplace: A confidential, decentralized platform where providers of AI models, datasets, and confidential computing hardware connect with clients [00:18:10]. It supports various monetization scenarios [00:19:01].
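To illustrate the client-side flow that a CLI like SPCTL automates (archive, encrypt locally, upload only ciphertext, and later decrypt results), here is a minimal Python sketch. It is not SPCTL’s actual implementation, and the storage calls are hypothetical placeholders.

```python
# Illustrative sketch of the archive-encrypt-upload / decrypt-result flow.
# Not the real SPCTL code; upload_to_storage() is a hypothetical placeholder
# for decentralized storage.
import os
import tarfile
from cryptography.fernet import Fernet

def archive_and_encrypt(dataset_dir: str, out_path: str) -> bytes:
    """Pack a dataset directory into a tar archive and encrypt it locally,
    so only ciphertext ever leaves the machine. Returns the secret key."""
    archive_path = out_path + ".tar.gz"
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(dataset_dir, arcname=os.path.basename(dataset_dir))
    key = Fernet.generate_key()
    with open(archive_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(out_path, "wb") as f:
        f.write(ciphertext)
    return key  # shared only with the attested TEE, never with the storage provider

def decrypt_result(encrypted_result: bytes, key: bytes) -> bytes:
    """Decrypt a result blob returned from the confidential environment."""
    return Fernet(key).decrypt(encrypted_result)

# Usage (storage call is a hypothetical placeholder):
# key = archive_and_encrypt("patient_scans/", "scans.enc")
# upload_to_storage("scans.enc")
```

In practice, handing the key to the compute side would itself be gated on attestation, as in the earlier verification sketch.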
Real-World Applications
Super Protocol opens up new possibilities for AI development and deployment:
Healthcare
Hospitals and labs are reluctant to share raw datasets due to tight controls, the high cost of generating them, and siloed access [00:04:06]. As a result, training medical AI models on real data can take months of negotiation [00:04:28]. Super Protocol addresses this by enabling training on sensitive data without exposing it [00:04:41].
- Case Study: BEAL (Brain Electrophysiology Laboratory). BEAL needed to submit perfect documentation for FDA approval of a new epilepsy diagnostic device; traditionally this took weeks of manual audits and risked exposing trade secrets [00:15:21]. Using Super Protocol’s confidential AI cloud with Titonix’s AI-powered audit tool, audit time dropped from weeks to 1-2 hours with zero risk of leaks, cutting out 120-day review delays [00:16:19].
Personal AI Agents
Mass adoption of personal AI agents that manage inboxes, calendars, or documents is hindered by concerns about deep access to private, sensitive data [00:04:50]. Confidentiality is the missing piece for their real-world adoption [00:05:34].
Digital Marketing and Custom Analytics
Fine-tuning models on real user-behavior data (tracking how people interact with websites and content) often risks running afoul of privacy laws (GDPR, CCPA) and ethical expectations, upsetting regulators and auditors [00:05:47]. Super Protocol bridges the gap between what is technically possible and what is allowed [00:06:14].
- Case Study: Realize & Mars. Realize, an AI company that measures ad reactions via facial expressions, needed more biometric video from partners [00:13:05]. Privacy laws made providers reluctant to share it [00:13:32]. With Super Protocol’s confidential AI cloud, training ran inside TEEs, and the data and models stayed secure and inaccessible [00:13:45]. Providers shared four times more footage, boosting the training set by 319% and leading to 75% accuracy and a 3-5% sales increase for Mars [00:14:26].
AI Model Monetization
Developers building domain-specific models want to monetize them without giving away their model or weights, while customers are unwilling to expose sensitive data for testing or production [00:06:22]. With confidential AI, neither party has to relinquish control [00:07:04].
Super Protocol in Action: Demos and Workflows
SuperAI Marketplace
The SuperAI Marketplace allows for secure collaboration and monetization of AI models, data, and compute [00:17:40].
- It’s built on a confidential and decentralized architecture with no centralized components [00:18:13].
- A blockchain-based ecosystem manages relationships and financial settlements between model/data providers, hardware providers, and clients [00:18:19].
- Models remain private; authors retain full control and ownership, and models can be leased but not downloaded [00:18:33].
- Models are deployed in a TEE and accessible via link or API [00:18:51].
- Monetization scenarios include per-hour, fixed-price, and revenue-sharing models [00:19:03].
- Deployment involves selecting a model (e.g., DeepSeek) and a compute configuration; the order is then placed on the blockchain and the model is prepared inside a confidential environment [00:19:40].
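Once an order completes and the model is exposed via a link or API, a client can call it like any hosted endpoint. The sketch below assumes an OpenAI-compatible chat-completion schema and uses placeholder URL and token values; the actual interface of a given marketplace deployment may differ.

```python
# Minimal sketch of calling a marketplace-deployed model over its API.
# The URL, token, and OpenAI-compatible schema are assumptions for illustration.
import requests

ENDPOINT = "https://example-deployment.invalid/v1/chat/completions"  # placeholder URL
TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; issued to the client for the leased model

def ask(prompt: str) -> str:
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "model": "deepseek",  # the leased model, never downloadable
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize the attached lab report in two sentences."))
```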
Agentic AI and Automated Workflows
Super Protocol enables building secure, automated AI workflows for processing sensitive data, such as medical data, using tools like n8n deployed on the protocol [00:21:50].
- Workflows run inside TEEs, inaccessible even to server admins or Super Protocol, combining low-code automation with decentralized infrastructure for fully confidential, compliant, and verifiable AI [00:22:01].
- An example workflow for medical data: A doctor uploads an X-ray and patient data via a protected web form [00:22:17]. The automated workflow cleans data, invokes an AI model to analyze the X-ray, generates a structured medical report, and emails it securely to the doctor [00:22:33].
- Credentials, such as API keys and email-sending credentials, are securely stored and isolated within the TEE [00:23:26]. Personal data is separated from the diagnostic input before AI processing to ensure privacy [00:24:15]; a sketch of that separation step follows this list.
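A minimal Python sketch of that separation step: identifiers are stripped before the diagnostic input reaches the model and re-attached only when the report is assembled inside the TEE. The field names and the injected model call are hypothetical.

```python
# Illustrative sketch of separating personal data from diagnostic input
# before AI processing, as in the medical workflow above. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Submission:
    patient_name: str
    patient_id: str
    xray_image: bytes
    symptoms: str

def split_personal_data(sub: Submission) -> tuple[dict, dict]:
    """Return (personal, diagnostic) so the model only ever sees diagnostic data."""
    personal = {"name": sub.patient_name, "id": sub.patient_id}
    diagnostic = {"image": sub.xray_image, "symptoms": sub.symptoms}
    return personal, diagnostic

def build_report(sub: Submission, analyze_xray) -> str:
    """Run the (injected) model on de-identified data, then re-attach identifiers
    only when assembling the report inside the TEE."""
    personal, diagnostic = split_personal_data(sub)
    finding = analyze_xray(diagnostic)  # the model sees no identifiers
    return f"Patient {personal['name']} ({personal['id']}): {finding}"
```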
Distributed Inference (Scaling)
Super Protocol enables distributed inference using vLLM across multiple GPU servers without relying on any single provider, demonstrating its “GPUless” capability [00:26:19].
- Every vLLM node runs inside a confidential VM powered by TEE hardware, interconnected over a private overlay network [00:27:04].
- Data, model weights, and intermediate activations are decrypted and processed only inside each confidential environment, with all inter-node communication encrypted [00:27:12].
- This setup provides both security through TEE hardware and improved performance through parallel processing [00:31:26].
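For reference, this is roughly what sharded inference looks like with vLLM at the Python level; the model id and parallelism degree are illustrative, and the confidential-VM and overlay-network wiring from the demo is not shown here.

```python
# Sketch of tensor-parallel inference with vLLM (illustrative values).
# In the demo, each worker runs inside a confidential VM and nodes communicate
# over an encrypted private overlay network; that wiring is outside this snippet.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",  # example model id
    tensor_parallel_size=4,                            # shard weights across 4 GPUs
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["Explain confidential computing in one paragraph."], params)
print(outputs[0].outputs[0].text)
```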
Multi-Party Training with Cryptographic Proofs
Super Protocol allows multiple parties to collaborate on training a model using their respective sensitive datasets and engines without exposing any party’s intellectual property or data [00:33:13].
- All inputs (datasets, the training engine) run inside a TEE, where no one, not even the cloud host or the participants, can access what is inside [00:33:50].
- Training is fully automated by the verified engine, the Super Protocol certification center, and smart contracts on a BNB Chain Layer 2 [00:34:17].
- Before training begins, the trusted loader creates an integrity report signed inside the TEE, which is later published on opBNB as part of the order report [00:39:53]. This provides public, tamper-proof evidence that the job ran in a certified environment with approved inputs [00:40:06].
- After every job, all raw inputs are wiped [00:41:07]. An order report is published on-chain, proving the run was genuine [00:41:10]. This report can be verified by any participant, eliminating the need for NDAs or manual audits [00:42:06].
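To make the verification step concrete, here is a minimal Python sketch of what checking an order report could look like: verify the certification authority’s signature over the integrity report, then compare the recorded input hashes against hashes computed locally. The report layout, field names, and key handling are assumptions, not Super Protocol’s actual on-chain format.

```python
# Illustrative verification of an order/integrity report (hypothetical format).
# A real verifier would fetch the report from opBNB and use Super Protocol's
# published certification-authority key; both are mocked here.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def sha256_file(path: str) -> str:
    """Hash a local input file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_order_report(report_json: bytes, signature: bytes,
                        ca_key: Ed25519PublicKey,
                        my_inputs: dict[str, str]) -> bool:
    """True if the report is signed by the certification authority and the
    input hashes it records match the files this participant contributed."""
    try:
        ca_key.verify(signature, report_json)
    except InvalidSignature:
        return False
    report = json.loads(report_json)
    recorded = report.get("input_hashes", {})
    return all(recorded.get(name) == sha256_file(path)
               for name, path in my_inputs.items())
```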