
Overview

The intelligence layer of the platform we are building rests on two core concepts:
  • Data: the highest-quality data, giving the most accurate, up-to-date information about all relevant market aspects
  • Agents: the autonomous intelligence components that process that data, serve users with insights, and execute tasks
This technology stack lets us build applications and integrations across different ecosystems while maintaining a single source of truth and a consistent state. Let's dive deeper into each of these concepts:

Agents & Nevron

Behind Neurobro's intelligence lies a network of specialized "Nevrons": modular AI agents working in harmony. We run 150+ Nevrons in total, and together they form the intelligence of Neurobro. Each Nevron handles specific tasks, from news analysis to technical evaluations, forming the building blocks of our ecosystem.

Every Nevron can communicate with other Nevrons, which enables complex data analytics, decision-making, research, thinking processes, and much more. This is truly where the magic happens. In addition, some Nevrons can communicate with end users or act in external systems, for example executing trades or posting tweets.

To power this system of Nevrons, we built our own framework called Nevron. The Nevron framework is open source and available on GitHub, with comprehensive technical documentation to help you get started.
Since Neurobro is being constantly updated, the core technologies may differ from the ones described in this section.
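To make the pattern concrete, here is a minimal Python sketch of specialized agents exchanging messages over a shared bus. The classes and method names are invented for illustration only; they are not the actual Nevron API.

```python
# Hypothetical sketch of Nevron-style agent-to-agent communication.
# None of these classes come from the actual Nevron framework; they only
# illustrate specialized agents cooperating through message passing.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Message:
    topic: str
    payload: dict


@dataclass
class MessageBus:
    subscribers: dict = field(default_factory=dict)

    def subscribe(self, topic: str, handler: Callable[[Message], None]) -> None:
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, message: Message) -> None:
        for handler in self.subscribers.get(message.topic, []):
            handler(message)


class NewsNevron:
    """Analyzes raw headlines and publishes structured findings."""

    def __init__(self, bus: MessageBus):
        self.bus = bus

    def analyze(self, headline: str) -> None:
        sentiment = "bullish" if "surge" in headline.lower() else "neutral"
        self.bus.publish(Message("news.analysis", {"headline": headline, "sentiment": sentiment}))


class SignalNevron:
    """Consumes analyses from other Nevrons and decides whether to act."""

    def __init__(self, bus: MessageBus):
        bus.subscribe("news.analysis", self.on_analysis)

    def on_analysis(self, message: Message) -> None:
        if message.payload["sentiment"] == "bullish":
            print(f"Posting signal for: {message.payload['headline']}")


bus = MessageBus()
news = NewsNevron(bus)
SignalNevron(bus)
news.analyze("Token X volume surges on Base")
```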

Why Nevron?

It is important to understand that Nevron is not a general-purpose framework; it is designed specifically for building specialized AI agents. Here is why it is worth using (a minimal configuration sketch follows the lists below):
Easy customization for different tasks or workflows through:
  • Modular components
  • Configurable parameters
  • Task-specific optimization
Quick reconfiguration capabilities:
  • Dynamic workflow adjustment
  • Real-time task modification
  • Seamless integration options
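As a rough illustration of what task-specific configuration and quick reconfiguration can look like, the sketch below uses an invented config object; the field names do not mirror Nevron's real configuration schema.

```python
# Hypothetical illustration of task-specific configuration and quick
# reconfiguration; the field names are invented for this sketch.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class AgentConfig:
    task: str                  # what the agent is specialized for
    data_sources: tuple        # which feeds the agent consumes
    model: str                 # which LLM backs the agent
    run_interval_seconds: int  # how often the workflow runs


news_agent = AgentConfig(
    task="news_analysis",
    data_sources=("rss", "x_api"),
    model="fast-chat-model",
    run_interval_seconds=60,
)

# Reconfiguring the same agent for a deeper, slower workflow is a one-line change.
research_agent = replace(
    news_agent, task="deep_research", model="reasoning-model", run_interval_seconds=3600
)
```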

Resource Optimization

Optimal computing utilization

Reasoning Power

Enhanced decision-making
Reliable, fact-based outputs (see the sketch after this list) through:
  • Multi-source verification
  • Error handling
  • Performance monitoring
  • Quality assurance
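A minimal sketch of the multi-source verification idea (not Neurobro's actual implementation): a value is only accepted when enough independent sources agree.

```python
# Illustrative multi-source verification: a claim is only accepted as fact
# if at least `min_agreement` independent sources report the same value.
from collections import Counter


def verify(claim_by_source: dict[str, str], min_agreement: int = 2) -> str | None:
    """Return the consensus answer if enough sources agree, otherwise None."""
    counts = Counter(claim_by_source.values())
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= min_agreement else None


print(verify({"coingecko": "2400.15", "dexscreener": "2400.15", "onchain": "2399.90"}))
# -> "2400.15" (two sources agree); returns None if all three disagree
```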

Shared Resources

Neurobro maintains a unified state across platforms through shared resources. Many different Nevrons work together on the same foundation of data and knowledge.

Dynamic Communication

Real-time agent collaboration

Knowledge Sharing

Centralized information pool

Cross-Platform Sync

Consistent state management

Agent Components

Neurobro appears as a single agent, but from a technical perspective, it is more complex. Neurobro includes multiple components that work together to provide both intelligence and functionality. Here’s an overview of the architecture:
Each platform-specific Neurobro instance maintains its own:
  • Functionality scope
  • Data sources
  • Environmental interaction points
Most of the intelligence is focused on the main product, Neurodex; see that section for more details.
Read more about the AI components in the Nevron section.
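As a rough picture, a platform-specific instance can be thought of as a small configuration object; the field names below are illustrative, not taken from the codebase.

```python
# Minimal sketch (illustrative names only) of what a platform-specific
# Neurobro instance keeps track of.
from dataclasses import dataclass


@dataclass
class PlatformInstance:
    platform: str                    # e.g. "neurodex", "x", "baseapp"
    functionality_scope: list[str]   # which capabilities are enabled here
    data_sources: list[str]          # which shared data feeds this instance reads
    interaction_points: list[str]    # where it can act (posts, replies, DMs, trades)


x_instance = PlatformInstance(
    platform="x",
    functionality_scope=["signal_posting", "mention_replies"],
    data_sources=["news", "onchain"],
    interaction_points=["tweets", "thread_comments"],
)
```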

Large Language Models

LLMs form the backbone of the AI, powering Nevron agents with advanced intelligence capabilities. Different LLMs are used for different purposes, as sketched after the examples below:

R1

Specialized long-running reasoning tasks (e.g., internal evaluation of discovered alpha)

V3

Fast direct communication with users
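A simplified sketch of how a request might be routed between the two model classes; the model identifiers and the call_llm helper below are placeholders, not the production setup.

```python
# Simplified illustration of routing between a slow reasoning model and a
# fast chat model. Model identifiers are placeholders, and `call_llm`
# stands in for whatever client the deployment actually uses.
REASONING_MODEL = "r1"   # long-running evaluation of found alpha
CHAT_MODEL = "v3"        # fast, direct replies to users


def route(task_type: str) -> str:
    """Pick a model based on the kind of work a Nevron needs to do."""
    return REASONING_MODEL if task_type in {"alpha_evaluation", "deep_research"} else CHAT_MODEL


def call_llm(model: str, prompt: str) -> str:  # placeholder for a real client call
    return f"[{model}] response to: {prompt}"


print(call_llm(route("alpha_evaluation"), "Evaluate this whale accumulation pattern"))
print(call_llm(route("user_chat"), "What's the sentiment on Base today?"))
```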

Platform-Specific Examples

While Neurobro, the agent that encompasses all the Nevrons, remains a single agent, its presence varies significantly across platforms. Here are some examples of how Neurobro works on different platforms:
  • Neurodex: the main platform, with full coverage of all Nevrons and their capabilities
    • Real-time responses to user questions
    • Automated posting of found signals and analysis
  • X (Twitter):
    • “@0xNeurobro” mention monitoring and replying
    • Commenting on threads
    • Automated posting of found signals and analysis
  • Baseapp messaging (XMTP):
    • Direct messaging with Neurobro in light mode, for fast answers with a balanced layer of Neurobro intelligence
    • Secure messaging with full end-to-end encryption via the XMTP protocol
    • Seamless trading capabilities with integrated Baseapp XMTP swaps support
To learn more about the Neurobro AI agents, refer to the Neurodex Agents section, since Neurodex is the main platform with their full capabilities.

Data

"Garbage in, garbage out" (GIGO), as some clever person once put it.
AI agents can only be as smart as the data they are fed. This is why we are building the highest-quality data layer, giving us the most accurate, up-to-date information about all relevant market aspects. Data is the second fundamental part of the platform's intelligence layer. Here are the main data sources we use:

Agent Memory

Agent Memory is similar to human memory, but for AI Agents. It consists of opinionated data points about the ecosystem the agent operates in. We currently use a blend of vector stores and graph databases to store this representation of memory.

Qdrant

Primary vector database

Weaviate

Secondary vector store

Neo4j

Primary graph database
The vectors themselves are embeddings. All embeddings are generated using OpenAI's state-of-the-art text-embedding-3-large model, ensuring:

Precision

High-accuracy matching

Relevance

Context-aware results

Performance

Optimized processing
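As a minimal sketch of how a single memory entry can be embedded and written to the primary vector store, assuming the standard OpenAI and Qdrant Python clients (the collection name and payload fields are illustrative):

```python
# Minimal sketch of writing one memory entry into a vector store.
# The collection name and payload fields are illustrative; the OpenAI and
# Qdrant calls are the standard ones from their Python clients.
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()                       # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")

COLLECTION = "agent_memory"                    # illustrative collection name

# text-embedding-3-large produces 3072-dimensional vectors
if not qdrant.collection_exists(COLLECTION):
    qdrant.create_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
    )

text = "Whale 0xabc... accumulated 1.2M of token X over the last 6 hours."
embedding = openai_client.embeddings.create(
    model="text-embedding-3-large", input=text
).data[0].embedding

qdrant.upsert(
    collection_name=COLLECTION,
    points=[PointStruct(id=1, vector=embedding, payload={"text": text, "source": "onchain"})],
)
```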

Onchain Data

We track 3,000+ whales on the Base chain and analyze their activity to surface the most relevant information about them. First, we store all onchain data for these whales and track their activity in (nearly) real time. Then we decode the onchain transactions and enrich the data with custom ML labels and technical data (pricing, volume, liquidity, etc.).

The biggest contributor to the quality of our onchain data is our unique ML labeling system, which lets us classify whales into different categories, track their activity in real time, perform behavioral analysis, analyze trading patterns, and more. We also use human-in-the-loop techniques to ensure data quality and label accuracy, because raw onchain data is full of errors, failures, mistakes, and scams, so we need to be very careful and precise about what gets stored.

Next, the onchain data is analyzed by specialized Nevrons, which feed direct insights to the agent. We also give our users access to most of this data on Neurodex.
See the Smart Money Dashboard for more details.
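The sketch below is a rough, rule-based stand-in for the labeling step; the production labels come from ML models with human review, but the shape of the enriched output is similar.

```python
# Illustrative stand-in for whale labeling: each decoded transaction history
# gets enriched with labels. The real system uses ML models plus human review;
# these heuristics only show the shape of the output.
from dataclasses import dataclass


@dataclass
class DecodedTransfer:
    wallet: str
    token: str
    usd_value: float
    is_dex_swap: bool


def label_wallet(recent: list[DecodedTransfer]) -> list[str]:
    """Very rough heuristic labels; production labels come from ML models."""
    labels = []
    total = sum(t.usd_value for t in recent)
    if total > 1_000_000:
        labels.append("whale")
    if sum(t.is_dex_swap for t in recent) / max(len(recent), 1) > 0.8:
        labels.append("active_dex_trader")
    return labels


history = [DecodedTransfer("0xabc...", "TOKENX", 400_000, True),
           DecodedTransfer("0xabc...", "TOKENX", 700_000, True)]
print(label_wallet(history))   # -> ['whale', 'active_dex_trader']
```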

PostgreSQL

Primary relational database

MongoDB

Document database for flexible schemas

Tweets & News & Articles

We track the most relevant news and articles about the crypto market and analyze them to extract the information that matters. We use a combination of RSS feeds and X API integrations to gather this information. Since it contains a lot of noise, all data passes through specialized Nevrons, which filter, aggregate, and deduplicate it to keep the database up to date.
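A minimal sketch of the deduplication idea, assuming a simple normalize-and-hash approach; the real Nevrons use richer similarity checks.

```python
# Illustrative deduplication pass for incoming headlines: normalize the text,
# hash it, and skip anything already seen.
import hashlib
import re


def normalize(headline: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before hashing."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", headline.lower())).strip()


seen: set[str] = set()


def is_new(headline: str) -> bool:
    key = hashlib.sha256(normalize(headline).encode()).hexdigest()
    if key in seen:
        return False
    seen.add(key)
    return True


print(is_new("Token X surges 40% on Base!"))   # True
print(is_new("token x surges 40% on base"))    # False, duplicate after normalization
```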

PostgreSQL

Primary relational database

MongoDB

Document database for flexible schemas

Technical Data

Much of the data is enriched with technical metrics such as pricing, volume, and liquidity. We use a combination of third-party APIs for this, including Coingecko, DexScreener, Base BlockScout, and Alchemy.
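For example, a price and volume lookup against Coingecko's public API might look like this (the other providers are queried in a similar way):

```python
# Example of pulling technical data (price and 24h volume) from Coingecko's
# public API; other providers are queried through their own endpoints.
import requests

resp = requests.get(
    "https://api.coingecko.com/api/v3/simple/price",
    params={"ids": "ethereum", "vs_currencies": "usd", "include_24hr_vol": "true"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
print(data["ethereum"]["usd"], data["ethereum"]["usd_24h_vol"])
```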

Third-party Data

For the most part, Neurobro uses proprietary APIs to get the most accurate and up-to-date information. However, some external data sources are also used as supporting information for the agent's data. We do not disclose the details of these APIs for security reasons and to protect our competitive advantage.

Proprietary APIs

~90% of visible value

Public APIs

~10% of functionality
Examples: Coingecko, DexScreener, Base BlockScout

Conclusion

The Neurobro ecosystem is built on a foundation of specialized AI agents and high-quality data sources. The combination of a fully proprietary data layer, with asymmetric access to relevant data, and AI infrastructure that lets the agents go beyond the limitations of standalone LLMs is the key to the success of the Neurobro ecosystem.