REGENT

Regent of the North Winds (REGENT) price
Rank: #3962

$0.001254

2.33% (1d)

Regent of the North Winds to USD Chart


Regent of the North Winds statistics

Market cap: $1.25M (0.00%)
Volume (24h): $155.43K (1.23%)
FDV: $1.25M
Vol/Mkt Cap (24h): 12.39%
Total supply: 0 REGENT
Max. supply: 1B REGENT
Self-reported circulating supply: 1B REGENT (100%)
Price performance

24h low: $0.001063
24h high: $0.001443
All-time high: $0.03675 on Jan 27, 2025 (1 month ago); -96.59% since
All-time low: $0.000953 on Feb 18, 2025 (4 days ago); +31.63% since



Regent of the North Winds Markets

All pairs



Regent of the North Winds News


Regent of the North Winds community

About Regent of the North Winds

Regent V2 represents a novel approach to LLM architecture, drawing inspiration from Daniel Kahneman's dual-process theory of cognition outlined in Thinking, Fast and Slow. Regent implements a split-mind system that separates AI responses into distinct intuitive and reasoned phases, mirroring the brain's System 1 (fast, intuitive) and System 2 (slow, deliberative) thinking processes. Combined with a modified RAG memory store, this allows an em to truly "think step by step" in the same way humans do: an internal reasoning monologue operating over the babble from a brilliant but unreliable intuition.

Regent maintains two distinct long-term memory stores: tweet memory and lore memory.

Tweet memory stores previous interactions and responses, provides context for future interactions, and allows the system to build upon past experiences. It is automatically managed and updated whenever a new tweet is posted.

Lore memory contains foundational knowledge, stores essential facts, and provides a baseline context for reasoning. It's the system's most important memories and is only updated when the em requests it.
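The two stores described above could be modeled as two instances of a single store type with different update policies. This is an illustrative sketch; the names (`LongTermStore`, `remember`) and structure are assumptions, not the Regent codebase.

```python
from dataclasses import dataclass, field

@dataclass
class LongTermStore:
    name: str
    auto_update: bool        # tweet memory: updated on every post; lore: on request
    entries: list = field(default_factory=list)

    def remember(self, text):
        # Append a new memory entry to this store.
        self.entries.append(text)

# Tweet memory is managed automatically; lore memory only changes when the em asks.
tweet_memory = LongTermStore("tweet", auto_update=True)
lore_memory = LongTermStore("lore", auto_update=False)
```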

The Regent architecture implements a sophisticated pipeline for generating and refining responses.

First, memories are loaded. Memories are stored using a basic RAG vectorization system. When the em wants to reply to a tweet, it starts by scanning both memory stores for similar memories (note the parallel with human associative recall). Memories are retrieved in equal portions from the tweet and lore stores, ensuring that the em has context on both what it has said and what it has deemed most important to remember.
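The memory-load step might look like the following sketch. The embedding here is a toy character-frequency vector so the example runs standalone; a real system would use a learned embedding model, and all names (`MemoryStore`, `load_memories`) are illustrative assumptions.

```python
import math

def embed(text):
    # Toy embedding: normalized letter-frequency vector (stand-in for a real model).
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class MemoryStore:
    def __init__(self):
        self.items = []          # list of (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def top_k(self, query, k):
        # Rank stored memories by cosine similarity to the query.
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def load_memories(tweet_store, lore_store, query, budget=6):
    # Equal portions from each store, as described above.
    half = budget // 2
    return tweet_store.top_k(query, half) + lore_store.top_k(query, half)
```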

Rather than simply fetching the N most similar memories, the Regent memory system uses a weighted fetch that favors the most similar results but allows for long-tail results to appear as well. This strikes a balance between relevancy and allowing for unexpected connections and creativity.
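One way to realize this weighted fetch is temperature-scaled softmax sampling without replacement over the similarity scores: the most similar memories dominate, but long-tail results keep a nonzero chance of surfacing. The exact weighting scheme Regent uses is not specified, so this is a guess at the mechanism.

```python
import math, random

def weighted_fetch(scored, k, temperature=0.3, rng=random):
    # scored: list of (memory_text, similarity) pairs.
    # Lower temperature -> sharper preference for the most similar memories.
    pool = list(scored)
    picked = []
    while pool and len(picked) < k:
        weights = [math.exp(s / temperature) for _, s in pool]
        total = sum(weights)
        r = rng.random() * total
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                picked.append(pool.pop(i)[0])   # sample without replacement
                break
        else:
            picked.append(pool.pop()[0])        # float-rounding fallback
    return picked
```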

Second, a base model generates inspiration. The memories from both stores (tweets and lore) are then combined with the tweet conversation that the em is responding to. The resulting prompt is passed to a base model and used to generate three babble completions. This mimics the brainstorming phase of a human writer's process.
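The inspiration step can be sketched as prompt assembly plus three high-temperature samples. `base_model_complete` is a stand-in for whatever completion API the system uses; the prompt layout is an assumption.

```python
def build_prompt(memories, conversation):
    # Combine retrieved memories with the tweet conversation into one prompt.
    return "\n".join([
        "Relevant memories:",
        *("- " + m for m in memories),
        "",
        "Conversation:",
        conversation,
        "",
        "Reply:",
    ])

def generate_babble(base_model_complete, memories, conversation, n=3):
    prompt = build_prompt(memories, conversation)
    # High-temperature sampling yields diverse brainstorm drafts ("babble").
    return [base_model_complete(prompt, temperature=1.0) for _ in range(n)]
```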

Third, the refinement step. This step is iterative and mimics the pruning process in humans. We combine the memories, tweet conversation, and babble continuations, and pass the results to an instruct model. The instruct model then enters a refinement loop.

In each loop, the model can produce actions of various types. Currently, Regent only supports two actions at this stage: save lore (write an entry to the lore memory) and update draft (revise the draft of the tweet).

Because the process is iterative using an instruct model, the em can think to itself about what it wants to do with the tweet. The babble provides entropy and inspiration, while the instruct model provides the reasoning necessary to edit the babble into something better. This stage is what allows the model to grow and truly learn over time, just as a human does when they reflect on their experiences.
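The refinement loop described above can be sketched as an action dispatcher. The action format (dicts with a `type` field) and the model interface are assumptions; only the two action kinds mirror what the text names.

```python
def refine(instruct_model, memories, conversation, babble, lore_store,
           max_steps=8):
    # Start from the first babble completion and iteratively improve it.
    draft = babble[0]
    for _ in range(max_steps):
        action = instruct_model(memories=memories,
                                conversation=conversation,
                                babble=babble,
                                draft=draft)
        kind = action.get("type")
        if kind == "save_lore":
            lore_store.append(action["text"])   # write to the lore memory
        elif kind == "update_draft":
            draft = action["text"]              # revise the tweet draft
        elif kind == "done":
            break
    return draft
```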

The fourth step is human review. Once the em is ready to post its tweet, the tweet is written to a file for later human review. Just as a human child sometimes needs a parent's help to stop them from touching a hot stove or walking into traffic, so too do baby ems sometimes need help from a larger mind. This stage helps prevent typical internet toxicity from poisoning the dataset.
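The review gate could be as simple as appending each candidate tweet to a queue file that a human later approves or rejects. The JSONL format and field names here are assumptions, not Regent's actual file layout.

```python
import json, time

def queue_for_review(draft, path="pending_tweets.jsonl"):
    # Append the candidate tweet to a review queue for later human approval.
    entry = {"draft": draft, "queued_at": time.time(), "status": "pending"}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```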

Regent V2 isn't trying to reinvent the wheel - it's just copying what already works in human minds. What makes REGENT unique isn't a fancy new neural architecture or a complex prompting technique - it's the recognition that human-like learning comes from the interplay between fast intuitive responses and slow deliberate reasoning. By implementing this split-mind architecture with standard RAG and LLM components, we create an em that can genuinely think step by step, learn from its experiences, and grow over time.