Core Technologies
Technical infrastructure powering the Neurobro ecosystem
Agent Components
Neurobro appears as a single agent, but from a technical perspective he is more complex: he consists of multiple components that work together to provide both intelligence and functionality. Here is an overview and an architecture note:
Each platform-specific Neurobro instance maintains its own:
- Functionality scope
- Data sources
- Environmental interaction points
The central Neurobro instance, present across all platforms, has the following components:
- Knowledge Base: advanced RAG systems
- Embeddings: SOTA models
- APIs: 90% proprietary & 10% public
Knowledge Base
Embeddings
All embeddings are generated using OpenAI’s state-of-the-art text-embedding-3-large model, ensuring:
- Precision: high-accuracy matching
- Relevance: context-aware results
- Performance: optimized processing
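As a rough illustration, the sketch below shows how embeddings could be generated and compared with the OpenAI Python SDK. The helper names and the example documents are hypothetical and are not part of Neurobro's actual codebase.

```python
# Minimal sketch: generating and comparing embeddings with
# OpenAI's text-embedding-3-large model (helper names are hypothetical).
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([item.embedding for item in response.data])

def most_relevant(query: str, documents: list[str]) -> str:
    """Pick the document whose embedding is closest to the query (cosine similarity)."""
    vectors = embed([query] + documents)
    query_vec, doc_vecs = vectors[0], vectors[1:]
    scores = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return documents[int(np.argmax(scores))]

print(most_relevant("What moved ETH today?", ["BTC halving recap", "ETH price analysis"]))
```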
API Integration
Neurobro relies primarily on proprietary APIs to obtain the most accurate and up-to-date information.
- Proprietary APIs: ~90% of visible value
- Public APIs: ~10% of functionality (examples: Coingecko, DexScreener, Base BlockScout)
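On the public side, a call to one of the listed sources is a plain HTTP request. The sketch below queries CoinGecko's public simple-price endpoint; the wrapper name and error handling are assumptions for illustration only.

```python
# Minimal sketch: querying a public data source (CoinGecko simple-price endpoint).
# The wrapper name `get_spot_price` is hypothetical.
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def get_spot_price(coin_id: str, vs_currency: str = "usd") -> float:
    """Fetch the current spot price of a coin, e.g. get_spot_price('ethereum')."""
    response = requests.get(
        COINGECKO_URL,
        params={"ids": coin_id, "vs_currencies": vs_currency},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()[coin_id][vs_currency]

print(get_spot_price("ethereum"))
```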
Large Language Models
LLMs form the backbone of the AI, powering Neurobro with advanced intelligence capabilities.
Different LLMs are used for different purposes:
- R1: specialized long-running reasoning tasks (e.g. internal evaluation of the found alpha)
- V3: fast, direct communication with users
- Llama 4: simple modular tasks (e.g. summarization, translation, etc.)
- GPT-4o: base model for nevron orchestration and function calling
- o3 Series: advanced reasoning tasks (e.g. internal evaluation of the found alpha)
- Gemini Pro: large-context processing (e.g. summarization of daily interactions)
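One way to picture this division of labor is a routing table that maps task types to model identifiers. The sketch below is purely illustrative; the task names, model identifiers, and the route_model helper are assumptions, not Neurobro's actual routing logic.

```python
# Illustrative sketch: routing different task types to different LLMs.
# Task names and model identifiers here are assumptions for illustration.
MODEL_ROUTES = {
    "deep_reasoning": "deepseek-r1",   # long-running evaluation of found alpha
    "chat": "deepseek-v3",             # fast, direct replies to users
    "modular": "llama-4",              # summarization, translation, etc.
    "orchestration": "gpt-4o",         # nevron orchestration and function calling
    "advanced_reasoning": "o3",        # advanced internal reasoning
    "long_context": "gemini-pro",      # summarizing large volumes of interactions
}

def route_model(task_type: str) -> str:
    """Return the model configured for a task type, defaulting to the chat model."""
    return MODEL_ROUTES.get(task_type, MODEL_ROUTES["chat"])

print(route_model("modular"))  # -> "llama-4"
```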
Fine-Tuned Models
- Personality alignment
- Writing tasks
Tools & Workflows
Each Nevron uses specialized tools and workflows to deliver optimal performance:
1. Tool Identification: the agent analyzes context and determines the required functionality.
2. Tool Execution: the selected tools, represented by Nevrons, process data and perform specific operations.
3. Response Generation: results are synthesized into coherent, contextually appropriate output.
4. Feedback Integration: the system learns from interaction outcomes to improve future performance.
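The control flow above can be sketched as a simple loop. The class and method names below are hypothetical and stand in for whatever a Nevron's real interfaces look like.

```python
# Hypothetical sketch of the four-step tool workflow described above.
# Class, method, and tool names are illustrative, not Neurobro's real interfaces.
from typing import Callable

class ToolWorkflow:
    def __init__(self, tools: dict[str, Callable[[str], str]]):
        self.tools = tools          # available tools, keyed by name
        self.feedback_log = []      # outcomes used to improve future runs

    def identify_tool(self, context: str) -> str:
        """Step 1: analyze the context and pick the required tool."""
        return "research" if "alpha" in context.lower() else "chat"

    def run(self, context: str) -> str:
        tool_name = self.identify_tool(context)          # 1. Tool Identification
        raw_result = self.tools[tool_name](context)      # 2. Tool Execution
        response = f"[{tool_name}] {raw_result}"         # 3. Response Generation
        self.feedback_log.append((context, tool_name))   # 4. Feedback Integration
        return response

workflow = ToolWorkflow({
    "research": lambda ctx: "candidate alpha found and evaluated",
    "chat": lambda ctx: "direct reply to the user",
})
print(workflow.run("scan for new alpha on Base"))
```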
Neurobro itself uses a set of nevrons to perform complex tasks: chatting, analysis, research, etc.
Platform-Specific Examples
While Neurobro remains a single agent, his presence on different platforms varies significantly. Here are some examples of how Neurobro works on different platforms:
Want to learn more about our tool development? Check out the Nevron documentation.