Cerebrum Node
Cerebrum Nodes are the workhorses of the decentralized compute network. Each node is a computing device that lends its processing power for training and executing AI models on the network. Nodes range from personal computers to high-powered servers, and together they form a highly scalable and resilient distributed infrastructure. Nodes contribute in two main ways:
Model Hosting: Storing and serving AI models for inference. Nodes load models into memory so they are ready for execution when requests arrive.
Inferencing: Running model algorithms on input data to produce an output; this is the computation behind agents' predictions and decisions.
By decentralizing these functions, Cerebrum Cloud removes reliance on central cloud providers, cutting costs and improving the platform's overall efficiency. The network's decentralized nature also makes it resistant to failures and attacks, ensuring high availability and reliability. Each node is incentivized to contribute its resources through a reward mechanism, which sustains the network's growth and health.
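As an illustration only (not the actual node implementation; all names here are hypothetical), a node's two roles can be pictured as: load a model into memory once, then execute it against incoming inference requests.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class InferenceRequest:
    model_id: str
    inputs: List[float]

class HypotheticalNode:
    """Toy stand-in for a Cerebrum Node: hosts models and runs inference."""

    def __init__(self) -> None:
        # Model hosting: models are loaded into memory once and kept
        # ready for execution when requests arrive.
        self._models: Dict[str, Callable[[List[float]], float]] = {}

    def host_model(self, model_id: str, model_fn: Callable[[List[float]], float]) -> None:
        self._models[model_id] = model_fn

    def infer(self, request: InferenceRequest) -> float:
        # Inferencing: execute the hosted model's algorithm on the
        # request's input data to produce an output.
        model = self._models[request.model_id]
        return model(request.inputs)

if __name__ == "__main__":
    node = HypotheticalNode()
    # A trivial "model" standing in for a real AI model.
    node.host_model("sum-v1", lambda xs: sum(xs))
    print(node.infer(InferenceRequest("sum-v1", [1.0, 2.0, 3.0])))  # 6.0
```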
The Cerebrum Node software currently supports Linux and Windows, with macOS support coming soon. It will be downloadable as an executable with the full product release and, during the Beta, through Cerebrum's Docker account. You can set up your nodes for the Beta Release by following the step-by-step guide in our "Cerebrum Node Setup and Registration Process" section.
Onboarding Process
During onboarding, Cerebrum performs extensive checks to ensure that each node meets our standards, including the following (illustrated in the sketch after this list):
Hardware Verification: We confirm that the specified GPUs are operational and meet the criteria required to perform the tasks assigned by the network.
Latency Testing: Device geolocation is used to estimate each device's response time so that requests can be handled efficiently.
Each node is verified by our validator nodes, which ping new nodes for the operational data required to admit them to the network.
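A rough sketch of the two checks above, under assumed interfaces and thresholds (the real onboarding criteria are not published; every name and number here is hypothetical):

```python
import time

# Hypothetical minimum requirements; placeholders, not Cerebrum's real criteria.
MIN_VRAM_GB = 8
MAX_LATENCY_MS = 200.0

def verify_hardware(reported_gpu: dict) -> bool:
    # Hardware verification: check that the node's reported GPU is
    # operational and meets the minimum criteria for network tasks.
    return (reported_gpu.get("operational", False)
            and reported_gpu.get("vram_gb", 0) >= MIN_VRAM_GB)

def measure_latency_ms(ping_node) -> float:
    # Latency testing: time a round trip to the candidate node.
    start = time.perf_counter()
    ping_node()  # stand-in for an actual network ping
    return (time.perf_counter() - start) * 1000.0

def onboard(reported_gpu: dict, ping_node) -> bool:
    return verify_hardware(reported_gpu) and measure_latency_ms(ping_node) <= MAX_LATENCY_MS

if __name__ == "__main__":
    gpu = {"operational": True, "vram_gb": 24}
    print(onboard(gpu, lambda: time.sleep(0.01)))  # True: passes both checks
```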
Once a machine is online in our network, the Cerebrum Node software ensures it dedicates its resources exclusively to the network tasks it undertakes. This is important for the integrity and fairness of our rental agreements: while a GPU is rented out, it is not used for any other tasks that could compromise its performance on the contracted job.
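One simple way to picture this exclusivity (purely illustrative; the actual enforcement mechanism is not described here) is a lease that reserves a GPU for a single rental and rejects any other work until the rental ends:

```python
import threading

class GPULease:
    """Toy exclusivity guard: one rental at a time per GPU."""

    def __init__(self, gpu_id: str) -> None:
        self.gpu_id = gpu_id
        self._lock = threading.Lock()
        self._renter = None

    def start_rental(self, renter: str) -> bool:
        # Acquire exclusive use of the GPU for one contracted job.
        if self._lock.acquire(blocking=False):
            self._renter = renter
            return True
        return False  # already rented; other tasks are rejected

    def end_rental(self) -> None:
        self._renter = None
        self._lock.release()

if __name__ == "__main__":
    lease = GPULease("gpu-0")
    print(lease.start_rental("job-A"))  # True: GPU now dedicated to job-A
    print(lease.start_rental("job-B"))  # False: rejected while rented out
    lease.end_rental()
    print(lease.start_rental("job-B"))  # True: available again
```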
Components
Cerebrum Cloud: The public user interface, delivered through our bespoke web app.
Cerebrum Node: The user interface for GPU suppliers, delivered as an executable or a Docker container.
Cerebrum Validator: The user interface for validator nodes, delivered through a dedicated application.
Cloud Infrastructure: Secure hosting for the Cerebrum Cloud, Cerebrum Node, and Validator platforms.
Validator Nodes: Geographically distributed nodes for the validation of transactions.
Blockchain: A public ledger that contains all the transactions.
Load Balancer/Reverse Proxy: Distributes the workload across nodes for fault tolerance and error checking (see the sketch after this list).
Model Host Supplier Network: A network consisting of individual and supplier-owned compute machines.
Secure VPN Tunnels: Secure channels of communication between components.
Docker Containers: Software packages bundling GPU access and communication components.
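To make the load balancer's role concrete (an illustrative sketch only; the node names and the round-robin policy are assumptions, not Cerebrum's actual design), here is how requests arriving from Cerebrum Cloud might be spread across the Model Host Supplier Network:

```python
import itertools
from typing import List

class RoundRobinBalancer:
    """Toy load balancer spreading requests across supplier nodes."""

    def __init__(self, nodes: List[str]) -> None:
        self._cycle = itertools.cycle(nodes)

    def dispatch(self, request: str) -> str:
        # Rotate through nodes so no single machine becomes a point of failure.
        node = next(self._cycle)
        return f"{node} handled {request}"

if __name__ == "__main__":
    balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
    for i in range(4):
        print(balancer.dispatch(f"inference-request-{i}"))
    # node-1 handled inference-request-0
    # node-2 handled inference-request-1
    # node-3 handled inference-request-2
    # node-1 handled inference-request-3
```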