Confidential GPU Attestation and Monetization for Secure AI Workloads
The Targon Virtual Machine (TVM) introduces a robust architecture for confidential computing designed specifically for secure AI workloads, enabling pre-training, post-training, and inference operations to be securely executed on bare-metal servers. TVM achieves verifiable hardware and GPU attestation, ensuring privacy and integrity of deployed AI models. With integrated attestation layers leveraging NVIDIA's nvTrust framework, TVM provides cryptographic proofs of hardware security, facilitating trust among stakeholders and enabling monetization opportunities within confidential subnets.
Artificial Intelligence (AI) has rapidly evolved into a critical infrastructure, necessitating stringent privacy and security guarantees, especially for sensitive models and data. The Targon Virtual Machine (TVM) addresses this by deploying Confidential Virtual Machines (CVMs) on bare-metal hardware, providing end-to-end security, integrity, and attestation of GPUs. Utilizing advanced attestation technologies, including NVIDIA’s nvTrust, TVM ensures that trained AI models remain confidential and secure, enabling controlled monetization of trained models in secure subnets.
The TVM architecture integrates multiple layers of verification, attestation, and deployment:
Host Verification evaluates the physical server’s readiness for confidential computing, ensuring compatibility with Intel TDX or AMD SEV technologies, validating TPM 2.0 modules, Secure Boot status, BIOS configurations, and kernel-level protections against known CPU vulnerabilities. It generates a comprehensive JSON report with hardware identification, crucial for system attestation.
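The report-building step can be sketched in a few lines. This is a minimal illustration only: the field names (`secure_boot`, `tpm_2_0`, `tdx_or_sev`, `cpu_mitigations`, `report_id`) are assumptions for the sketch, not TVM's actual report schema.

```python
import hashlib
import json
import platform

def build_host_report(checks: dict) -> dict:
    """Assemble a host-verification report; field names are illustrative."""
    report = {
        "host": platform.node(),
        "checks": checks,  # e.g. secure_boot, tpm_2_0, tdx_or_sev, cpu_mitigations
        "eligible": all(checks.values()),
    }
    # Derive a stable hardware identifier by hashing the canonicalized body.
    body = json.dumps(report, sort_keys=True).encode()
    report["report_id"] = hashlib.sha256(body).hexdigest()
    return report

report = build_host_report({
    "secure_boot": True,
    "tpm_2_0": True,
    "tdx_or_sev": True,
    "cpu_mitigations": True,
})
print(json.dumps(report, indent=2))
```

Hashing a canonical (sorted-keys) serialization gives every consumer of the report the same identifier for the same hardware state.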
The attestation service securely receives attestation reports from Host Verification, validates them against defined security criteria, and decides whether the host is trustworthy. Upon successful attestation, it automatically initiates generation of Confidential Virtual Machines (CVMs), tracks progress, and provides secure VM downloads, acting as a trusted intermediary in the deployment workflow.
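The decision logic reduces to checking a report against a policy. A minimal sketch, assuming a hypothetical report shape with a `checks` map; the check names and the `provision_cvm`/`reject` outcomes are illustrative, not TVM's real policy or API:

```python
# Minimum criteria a host report must satisfy (illustrative names).
REQUIRED_CHECKS = ("secure_boot", "tpm_2_0", "tdx_or_sev")

def is_trustworthy(report: dict) -> bool:
    """Validate a submitted report against the defined security criteria."""
    checks = report.get("checks", {})
    return all(checks.get(name) is True for name in REQUIRED_CHECKS)

def handle_report(report: dict) -> str:
    """On success the real service would kick off CVM generation and
    track its progress; here we only return the decision."""
    return "provision_cvm" if is_trustworthy(report) else "reject"
```

Requiring each check to be literally `True` (not merely truthy or present) keeps malformed or partial reports on the reject path.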
Inside each CVM, GPU attestation confirms GPU integrity and firmware authenticity; the CVM GPU Attestation service performs these checks by leveraging NVIDIA's nvTrust SDK.
The Manifold SDK standardizes data models, constants, and APIs across TVM's components, simplifying integration and ensuring consistent communication between verification and attestation services.
The end-to-end confidential GPU attestation workflow is as follows:
Step 1: Host Preparation
Step 2: Host Attestation
Step 3: CVM Deployment
Step 4: GPU Attestation
Step 5: Secure AI Model Execution and Monetization
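The five steps above can be sketched as a single control flow. The callables here stand in for the TVM services (attestation, provisioning, GPU attestation, model serving) and are hypothetical, not real APIs:

```python
def run_workflow(host_report, attest, provision_cvm, attest_gpus, serve_model):
    """Minimal sketch of the end-to-end flow; each callable is a stand-in."""
    if not attest(host_report):                # Step 2: host attestation
        raise RuntimeError("host failed attestation")
    cvm = provision_cvm(host_report)           # Step 3: CVM deployment
    gpu_result = attest_gpus(cvm)              # Step 4: GPU attestation
    if not gpu_result.get("validated"):
        raise RuntimeError("GPU attestation failed")
    return serve_model(cvm)                    # Step 5: secure execution

# Usage with stub services:
out = run_workflow(
    {"eligible": True},
    attest=lambda r: r["eligible"],
    provision_cvm=lambda r: "cvm-1",
    attest_gpus=lambda c: {"validated": True},
    serve_model=lambda c: f"serving on {c}",
)
print(out)  # serving on cvm-1
```

The point of the structure is that model execution is unreachable unless both the host and the GPU attestation gates have passed.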
The CVM GPU Attestation service leverages NVIDIA's nvTrust framework through a robust Python wrapper (attestation_wrapper.py).
A Go-based HTTP service wraps the Python-based nvTrust interaction, exposing a RESTful API. Named pipe IPC provides clean separation between attestation data and logging, facilitating error handling and reliability.
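The key idea of the IPC design is that attestation data and log output travel on separate channels, so diagnostics can never corrupt the JSON the caller parses. A minimal sketch: the real service uses a named pipe (FIFO) between the Go process and the Python wrapper, while this self-contained demo substitutes an anonymous `os.pipe()` so it runs anywhere.

```python
import json
import os
import sys
import threading

def emit_attestation(result_fd: int, result: dict) -> None:
    """Write attestation JSON to the data channel; diagnostics go to stderr.
    In TVM's design the data channel is a named pipe (os.mkfifo)."""
    print("attestation starting", file=sys.stderr)   # log channel
    with os.fdopen(result_fd, "w") as pipe:          # data channel
        json.dump(result, pipe)
    print("attestation done", file=sys.stderr)

read_fd, write_fd = os.pipe()
writer = threading.Thread(
    target=emit_attestation,
    args=(write_fd, {"success": True, "validated": True}),
)
writer.start()
with os.fdopen(read_fd) as pipe:   # the Go service plays this reader role
    received = json.load(pipe)
writer.join()
print(received["success"])  # True
```

Because the reader parses only the data channel, a noisy or failing attestation run degrades into clean errors rather than garbled output.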
GPU attestation results are provided in structured JSON:
{
  "success": true,
  "nonce": "nonce_value",
  "token": "jwt_attestation_token",
  "claims": { /* JWT claims */ },
  "validated": true,
  "gpus": [
    {
      "id": "GPU-0",
      "model": "NVIDIA H100",
      "claims": { /* GPU-specific claims */ }
    }
  ]
}
This format enables straightforward integration with downstream services or blockchain-based validation mechanisms.
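As an example of such downstream integration, a consumer can parse the document and extract the attested GPU IDs only when the overall attestation validated. A small sketch against a sample shaped like the format above (claims bodies elided):

```python
import json

# Sample result document; "claims" bodies are elided for brevity.
sample = """
{
  "success": true,
  "nonce": "nonce_value",
  "token": "jwt_attestation_token",
  "claims": {},
  "validated": true,
  "gpus": [
    {"id": "GPU-0", "model": "NVIDIA H100", "claims": {}}
  ]
}
"""

def validated_gpus(raw: str) -> list:
    """Return IDs of attested GPUs, or [] when attestation did not validate."""
    doc = json.loads(raw)
    if not (doc.get("success") and doc.get("validated")):
        return []
    return [gpu["id"] for gpu in doc.get("gpus", [])]

print(validated_gpus(sample))  # ['GPU-0']
```

Gating on both `success` and `validated` means a transport-level success with a failed validation still yields no trusted GPUs.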
The Host Verification tool performs the comprehensive hardware checks described above, covering Secure Boot status, TPM 2.0 validation, BIOS configuration, and CPU confidential-computing support.
The attestation service evaluates critical sections of submitted host reports and decides on CVM provisioning based on pre-defined security standards.
Upon successful verification, VM provisioning is automatically initiated.
The TVM framework enables secure monetization of AI models by ensuring that proprietary model code and weights remain confidential and are executed solely within attested, cryptographically secure environments. By leveraging cryptographic attestation combined with GPU-level verification via NVIDIA’s nvTrust, model developers can confidently deploy valuable proprietary AI models without risking intellectual property leakage.
Specifically, TVM facilitates monetization by enabling:
Secure Model Deployment: Model developers can securely train, fine-tune, and serve their AI models privately, ensuring the model weights and associated intellectual property are never exposed.
Trustworthy Inference Consumption: Users and enterprises benefit from verifiable guarantees of hardware integrity and execution-environment confidentiality, essential for deploying AI workloads in sensitive, regulated, or mission-critical applications.
Attestation-based Monetization: Trained AI models can be directly monetized within subnet marketplaces through attestation-based access control, creating secure, verifiable economic relationships between model providers and consumers without ever compromising model confidentiality.
This secure environment unlocks new business opportunities, fostering decentralized AI marketplaces built on trust and verifiable security.
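Attestation-based access control can be sketched as a claims check performed before any model request is served. The claim names below (`exp`, `measurements_valid`, `models`) are illustrative assumptions for the sketch; real deployments would verify the JWT signature and use the verifier's actual claim set.

```python
import time

def grant_access(claims: dict, required_model: str) -> bool:
    """Gate model access on attestation claims (assumed to be
    signature-verified upstream). Claim names are illustrative."""
    if claims.get("exp", 0) < time.time():
        return False                     # attestation token expired
    if not claims.get("measurements_valid"):
        return False                     # GPU measurements did not verify
    # Grant access only to models this consumer is entitled to.
    return required_model in claims.get("models", [])
```

A marketplace built this way never has to reveal weights to decide entitlement: the decision rides entirely on the attested claims.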
TVM supports streamlined deployment, leveraging containerization and cloud orchestration technologies.
TVM’s design emphasizes robust security throughout.
The Targon Virtual Machine (TVM) establishes a secure, confidential computing framework specifically tailored for AI workloads, providing robust hardware and GPU attestation through integration with NVIDIA's nvTrust SDK. By ensuring confidentiality and integrity, TVM enables trustworthy monetization of AI models, fostering secure, decentralized AI marketplaces.
TVM: Confidential Computing for Trusted AI Monetization
Empowering Secure AI Innovation through Trusted Hardware and GPU Attestation.
© 2025 Manifold Labs, Inc. All Rights Reserved