
Training Platform


Introduction

Core Framework: The training platform serves as the cornerstone of SynonAI's decentralized computing network. It enables users to efficiently and cost-effectively train and refine models by utilizing the idle GPUs of participants globally. The architecture of the platform is designed using cutting-edge technologies and methodologies to facilitate distributed AI model training.

Key features and technical details of the training platform

Decentralized Architecture: The platform utilizes a decentralized network of connected devices, distributing the training workload across multiple GPUs. This decentralized approach reduces reliance on centralized resources and keeps the cost of training a model low.
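For illustration, the sketch below shows one common way to spread a training workload across several GPUs using PyTorch's DistributedDataParallel. It is a generic, minimal example, not SynonAI's implementation; the model, data, and launcher setup are placeholders.

```python
# Minimal sketch (not SynonAI's code): data-parallel training across several
# GPUs with PyTorch DistributedDataParallel. Each worker process trains on its
# own shard of data and gradients are averaged automatically during backward().
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train_worker():
    # RANK / WORLD_SIZE / MASTER_ADDR / LOCAL_RANK are normally set by the
    # launcher (e.g. torchrun); here we simply read them from the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)    # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):                                  # placeholder training loop
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()        # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    train_worker()
```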

Optimized Resource Allocation: SynonAI's smart resource distribution system automatically allocates tasks to the best-suited GPUs in the network, enhancing performance and decreasing training times.
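The snippet below is a purely illustrative sketch of this idea: candidate GPUs are scored on free memory, throughput, and network latency, and the task is assigned to the highest-scoring device. The field names, weights, and scoring rule are hypothetical and do not describe SynonAI's actual scheduler.

```python
# Illustrative sketch only: scoring candidate GPUs and picking the best-suited
# one for a task. All names and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class GpuNode:
    node_id: str
    memory_free_gb: float   # free VRAM on the device
    tflops: float           # advertised compute throughput
    latency_ms: float       # measured network latency to the job's data

def score(gpu: GpuNode, required_memory_gb: float) -> float:
    """Higher is better; GPUs without enough free memory are disqualified."""
    if gpu.memory_free_gb < required_memory_gb:
        return float("-inf")
    # Favor fast GPUs with low latency (weights are illustrative).
    return gpu.tflops - 0.5 * gpu.latency_ms

def allocate(task_memory_gb: float, gpus: list[GpuNode]) -> GpuNode:
    return max(gpus, key=lambda g: score(g, task_memory_gb))

# Example usage
gpus = [
    GpuNode("alpha-0", memory_free_gb=24, tflops=82, latency_ms=5),
    GpuNode("edge-17", memory_free_gb=8,  tflops=35, latency_ms=40),
]
print(allocate(task_memory_gb=16, gpus=gpus).node_id)   # -> "alpha-0"
```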

Segmented Training and Unified Modeling: The training platform utilizes sophisticated techniques to split the training data and AI models into smaller, manageable segments that can be processed concurrently by the network's GPUs. It applies methods like data parallelism and model parallelism tailored to the specific demands of the AI model under training. Once processed, these segments are combined by the platform to create the complete AI model, ensuring superior learning results. Federated Learning and parameter averaging are employed to integrate updates from various devices, preserving data privacy.
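As a concrete illustration of the parameter-averaging step mentioned above, the sketch below combines model updates from several devices FedAvg-style, weighting each device by the amount of local data it trained on. It is a conceptual example, not production code.

```python
# Minimal sketch of federated parameter averaging (FedAvg-style), one way to
# combine updates from many devices without sharing their raw training data.
import torch

def federated_average(state_dicts, weights=None):
    """Average model parameters from several devices.

    state_dicts: list of model.state_dict() from participating devices
    weights:     optional per-device weights (e.g. number of local samples)
    """
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n
    else:
        total = sum(weights)
        weights = [w / total for w in weights]

    averaged = {}
    for key in state_dicts[0]:
        averaged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return averaged

# Example: combine updates from three devices, weighted by local data size.
models = [torch.nn.Linear(4, 2) for _ in range(3)]
global_state = federated_average([m.state_dict() for m in models], weights=[100, 50, 25])
```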

Enhanced Data Security: The platform incorporates advanced encryption and secure multi-party computation to safeguard user data. Additional security measures, such as differential privacy, further protect the integrity of training data.
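The sketch below illustrates the differential-privacy idea in isolation: each per-sample gradient is clipped to bound its influence, then Gaussian noise is added before the update leaves the device. The clipping norm and noise multiplier shown are placeholder values, not SynonAI's settings.

```python
# Illustrative DP-SGD-style step: clip per-sample gradients and add calibrated
# Gaussian noise so that no single training example dominates the update.
import torch

def privatize_gradients(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1):
    """per_sample_grads: tensor of shape (batch, num_params)."""
    # 1. Clip each sample's gradient to bound its influence.
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    scale = (clip_norm / norms).clamp(max=1.0)
    clipped = per_sample_grads * scale

    # 2. Sum and add Gaussian noise calibrated to the clipping bound.
    summed = clipped.sum(dim=0)
    noise = torch.randn_like(summed) * noise_multiplier * clip_norm
    return (summed + noise) / per_sample_grads.shape[0]

# Example: 32 per-sample gradients over 1,000 parameters.
noisy_grad = privatize_gradients(torch.randn(32, 1000))
```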

Decentralized AI Training Framework: Built on these principles, the training platform offers an efficient, secure, and cost-effective solution for AI model training in a decentralized setting. The platform's sophisticated features and technical foundations deliver a powerful tool for organizations aiming to leverage AI capabilities while overcoming the constraints of traditional, centralized computing resources.

The first node in the SynonAI network is called the "Alpha Node", and it is maintained by the SynonAI team.
In most cases, when training with multiple GPUs, our scheduler allocates training processes in a way that minimizes network latency and improves training efficiency.
However, when no single node can host all of the training processes (so no such "best choice" exists), the scheduler deploys them across multiple nodes.
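The fallback described above can be sketched as follows: prefer placing every training process on one node to avoid network latency, and only spread processes across nodes when no single node has enough free GPUs. The function, field, and node names below are hypothetical.

```python
# Illustrative placement sketch (hypothetical API, not SynonAI's scheduler):
# prefer a single node; otherwise spread processes across nodes, largest first.
def place_processes(num_processes: int, free_gpus_per_node: dict[str, int]) -> dict[str, int]:
    """Return {node_id: processes_assigned}."""
    # Best case: one node can host everything.
    for node, free in sorted(free_gpus_per_node.items(), key=lambda kv: -kv[1]):
        if free >= num_processes:
            return {node: num_processes}

    # Fallback: spread across nodes, largest first.
    placement, remaining = {}, num_processes
    for node, free in sorted(free_gpus_per_node.items(), key=lambda kv: -kv[1]):
        if remaining == 0:
            break
        take = min(free, remaining)
        if take > 0:
            placement[node] = take
            remaining -= take
    return placement

# Example: 6 processes, no single node has 6 free GPUs.
print(place_processes(6, {"alpha-node": 4, "node-b": 3}))  # {'alpha-node': 4, 'node-b': 2}
```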