Cerebrum

Cerebrum Node

Cerebrum Nodes are the workhorses of the decentralized compute network. Each node is a computing device that lends its computing power for training and executing AI models on the network. Nodes range from personal computers to high-powered servers, and together they form a highly scalable, resilient distributed infrastructure. Nodes contribute to the following:

  • Model Hosting: Store and serve AI models for inference. Nodes load models into memory so they are ready for execution as soon as requests arrive.

  • Inferencing: Execute model algorithms on input data so that agents can make predictions or decisions, producing an output.
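The two duties above can be sketched in a few lines. This is a minimal illustration, not the actual Cerebrum Node implementation: the `Model` and `CerebrumNode` classes and the dict-based model store are assumptions made for the example.

```python
class Model:
    """Stand-in for a loaded AI model (illustrative, not Cerebrum's API)."""
    def __init__(self, name):
        self.name = name

    def run(self, inputs):
        # A real model would execute on the node's GPU; here we echo a result.
        return {"model": self.name, "output": f"prediction for {inputs}"}


class CerebrumNode:
    def __init__(self):
        self.models = {}  # model name -> loaded Model instance

    def host(self, name):
        """Model Hosting: load the model into memory so it is ready
        for execution when requests arrive."""
        self.models[name] = Model(name)

    def infer(self, name, inputs):
        """Inferencing: run the model's algorithm on input data to
        produce an output."""
        if name not in self.models:
            raise KeyError(f"model {name!r} is not hosted on this node")
        return self.models[name].run(inputs)


node = CerebrumNode()
node.host("sentiment-v1")
print(node.infer("sentiment-v1", "great product"))
```

In a real node, `host` would allocate GPU memory and `infer` would be invoked by the network's request-routing layer rather than called directly.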

By decentralizing these functions, Cerebrum Cloud eliminates reliance on central cloud providers, cuts costs, and improves the overall efficiency of the platform. The decentralized nature of the network also makes it resistant to failures and attacks, ensuring high availability and reliability. Each node is incentivized to contribute its resources through a reward mechanism, which sustains the network's continued growth and health.

The Cerebrum Node software currently supports Linux and Windows, with macOS support coming soon. It will be downloadable as an executable with the full product release, and is available through Cerebrum's Docker account during the Beta. You can set up your nodes for the Beta release by following the step-by-step guide in our "Cerebrum Node Setup and Registration Process" section.

Onboarding Process

During the onboarding process, Cerebrum performs extensive checks to ensure that each node meets our standards, including the following:

  • Hardware Verification: We confirm that the specified GPUs are operational and meet the criteria required to perform the tasks assigned by the network.

  • Latency Testing: Device geolocation is used to estimate each device's response time so that requests can be handled efficiently.

Each node is verified by our validator nodes, which poll new nodes for the operational data needed to confirm them.
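The onboarding checks above can be sketched as follows. The thresholds (`MIN_VRAM_GB`, `MAX_LATENCY_MS`) and the shape of the reported GPU info are assumptions for illustration; Cerebrum's actual criteria are not specified in this section.

```python
import time

MIN_VRAM_GB = 8        # assumed minimum, not Cerebrum's published threshold
MAX_LATENCY_MS = 200.0 # assumed maximum acceptable round-trip time


def verify_hardware(gpu_info):
    """Hardware Verification: check the reported GPU is operational
    and meets the minimum criteria for network tasks."""
    return gpu_info.get("operational", False) and gpu_info.get("vram_gb", 0) >= MIN_VRAM_GB


def measure_latency_ms(ping):
    """Latency Testing: time a round trip to the node."""
    start = time.perf_counter()
    ping()  # in practice, a network request routed by geolocation
    return (time.perf_counter() - start) * 1000.0


def onboard(gpu_info, ping):
    """Admit the node only if both checks pass."""
    return verify_hardware(gpu_info) and measure_latency_ms(ping) <= MAX_LATENCY_MS


# Example: a healthy node with a fast (stubbed) ping handler.
print(onboard({"operational": True, "vram_gb": 24}, lambda: None))  # prints True
```

In production the `ping` callable would be a real network request issued by a validator node, and failed checks would keep the node out of the active pool.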

Once a machine is online in our network, the Cerebrum Node software ensures it dedicates its resources exclusively to the network tasks it undertakes. This is important for the integrity and fairness of our rental agreements: it ensures that while a GPU is rented out, it is not used for any other tasks that could compromise its performance on the contracted job.
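The exclusivity guarantee amounts to a lease that refuses concurrent use of a rented GPU. The `GpuLease` class below is an illustrative sketch of that invariant, not the Cerebrum Node software's actual mechanism.

```python
import threading


class GpuLease:
    """One GPU, at most one renter at a time (illustrative sketch)."""

    def __init__(self, gpu_id):
        self.gpu_id = gpu_id
        self._lock = threading.Lock()
        self.renter = None

    def acquire(self, renter):
        """Grant the GPU to one renter; refuse while it is leased out."""
        if not self._lock.acquire(blocking=False):
            return False  # already dedicated to another contracted job
        self.renter = renter
        return True

    def release(self, renter):
        """Only the current renter may end the lease."""
        if self.renter != renter:
            raise ValueError("only the current renter may release the GPU")
        self.renter = None
        self._lock.release()


lease = GpuLease("gpu-0")
assert lease.acquire("job-A")      # first rental succeeds
assert not lease.acquire("job-B")  # concurrent use is refused
lease.release("job-A")
assert lease.acquire("job-B")      # available again after release
```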

Components

  • Cerebrum Cloud: The user interface for the public through our bespoke web app.

  • Cerebrum Node: The user interface for GPU suppliers, delivered as an executable or a Docker container.

  • Cerebrum Validator: The user interface for validator nodes, provided through a dedicated application.

  • Cloud Infrastructure: Secure hosting for the Cerebrum Cloud, Cerebrum Node, and Validator platforms.

  • Validator Nodes: Geographically distributed nodes for the validation of transactions.

  • Blockchain: A public ledger that contains all the transactions.

  • Load Balancer/Reverse Proxy: Distributes the workload across nodes for fault tolerance and error checking.

  • Model Host Supplier Network: A network consisting of individual and supplier-owned compute machines.

  • Secure VPN Tunnels: Secure channels of communication between components.

  • Docker Containers: Software packages that bundle GPU access and communication components.
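To show how the load balancer in the component list achieves fault tolerance, here is a minimal round-robin dispatcher with failover. The round-robin policy and the handler interface are assumptions for this sketch; the document does not specify the balancer's actual strategy.

```python
import itertools


class LoadBalancer:
    """Spread requests across the supplier network, skipping failed
    nodes (illustrative sketch of the Load Balancer/Reverse Proxy role)."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)
        self._count = len(nodes)

    def dispatch(self, request, handlers):
        """Try nodes in round-robin order; fail over on errors."""
        for _ in range(self._count):
            node = next(self._cycle)
            try:
                return node, handlers[node](request)
            except Exception:
                continue  # this node failed; try the next one
        raise RuntimeError("no node could serve the request")


def ok(req):
    return f"served {req}"


def down(req):
    raise ConnectionError("node offline")


lb = LoadBalancer(["node-1", "node-2"])
handlers = {"node-1": down, "node-2": ok}
print(lb.dispatch("req-42", handlers))  # ('node-2', 'served req-42')
```

A production reverse proxy would also terminate TLS (the secure VPN tunnels above) and record failures for the reputation system, but the failover loop captures the fault-tolerance idea.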


Last updated 4 months ago