Articles by Victoria

Building My Second Brain on OpenClaw (Part 1)

Part 1 - Intro to OpenClaw + Setting up on Oracle Cloud Free Tier + Connect to Telegram

May 5, 2026 · 14 min read

Welcome back to another Articles by Victoria, the place where I randomly write things I'm curious about.

I have been thinking a lot lately about what it actually means to have a personal assistant. One that knows your context, lives alongside your work, and shows up without you having to start from scratch every single time. Something persistent, something that actually grows with you.

Essentially, I want a second brain haha.

That is what led me to OpenClaw.

What Is OpenClaw?

OpenClaw is an open source personal AI assistant that runs on your own server and connects to the messaging apps you already use, like Telegram, Signal, or WhatsApp. You are not renting access to someone else's cloud product. You are running your own instance, with your own data, on your own infrastructure.

The thing that makes OpenClaw different from most AI tools is persistence. Most chatbots are stateless. Every conversation starts from zero. OpenClaw maintains memory across sessions. It can run scheduled tasks, respond to messages while you are asleep, and take actions on your behalf using tools called skills.

The way I think about it: it is less like a chatbot and more like a second brain you can actually talk to.

It started as a weekend project by Peter Steinberger, originally a WhatsApp relay script. Within eight weeks it had over 180,000 GitHub stars. That trajectory alone tells you something real is happening here.

How OpenClaw Works Under the Hood

Before setting anything up, I wanted to understand what I was actually deploying. Here is a simplified breakdown of the architecture.

Channel Adapters

When you send a message to your assistant through Telegram, that message does not go directly to an AI model. It first hits a channel adapter. The adapter handles authentication, parses the inbound message into a standard format, enforces access control, and formats outbound replies back to the platform.

This is why OpenClaw can connect to multiple messaging platforms without you needing to rewrite anything. The adapter layer abstracts all of that away.

Gateway Control Plane

The Gateway is the core orchestrator. It receives the normalized message, decides which session it belongs to, manages the queue of work, and coordinates everything that happens next. Think of it as the traffic control layer that sits between your messages and the AI.

Agent Runtime

This is where the actual thinking happens. The agent runtime goes through a few stages every time it processes a message.

First, session resolution. It figures out which conversation this belongs to and loads the relevant history.

Second, context assembly. It pulls together your workspace files, memory, recent conversation history, and any tools available to the agent.

Third, the execution loop. The model runs, decides if it needs to use a tool, uses it, checks the result, and continues until it has a final response to send back.

The system prompt architecture is also worth understanding. OpenClaw builds your system prompt dynamically from files in your workspace. Files like SOUL.md (your personality), MEMORY.md (long-term memory), USER.md (context about you), and AGENTS.md (behavioral rules) all get assembled into the context the model sees. This is how the assistant actually knows who it is talking to and how to behave.
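Put together, the workspace looks something like this. The file roles come straight from the description above; the exact layout may vary between versions:

```
workspace/
├── SOUL.md     # personality
├── MEMORY.md   # long-term memory
├── USER.md     # context about you
└── AGENTS.md   # behavioral rules
```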

Skills

Skills are the tools the agent can use. They are modular, installable extensions that let your assistant do things beyond conversation: checking your calendar, reading emails, running code, querying APIs. Each skill comes with a SKILL.md file that tells the agent how and when to use it.

This is the part that makes OpenClaw feel like a second brain rather than a chat window. The more skills you add, the more capable it becomes.
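As a rough illustration of what a SKILL.md contains, here is a sketch for a hypothetical calendar skill. The section names and layout here are my assumption, not the official schema; the point is that the file tells the agent what the skill does and when to reach for it:

```markdown
# calendar

Check and summarize upcoming events on the user's calendar.

## When to use
- The user asks what is on their schedule, or wants something booked.

## How to use
- Query with a date range; summarize results rather than dumping raw data.
```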

Here is the overall architecture (in PlantUML, to keep it lightweight instead of generating images):
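A rough sketch of the layers described above, from inbound message to reply:

```plantuml
@startuml
actor User
component "Channel Adapter\n(Telegram / Signal / WhatsApp)" as Adapter
component "Gateway\n(control plane)" as Gateway
component "Agent Runtime" as Runtime
component "Skills" as Skills
component "Model Provider" as Model

User --> Adapter : message
Adapter --> Gateway : normalized message
Gateway --> Runtime : session + work queue
Runtime --> Model : completion requests
Runtime --> Skills : tool calls
Runtime --> Gateway : final response
Gateway --> Adapter : outbound reply
@enduml
```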

Setting Up OpenClaw on Oracle Cloud Free Tier

Here is the practical part. I wanted to host this 24/7 without paying for it, and without running it on my personal machine. Oracle Cloud's Always Free tier turned out to be the best option by a significant margin.

Most free tiers give you 1GB of RAM. OpenClaw recommends at least 2GB. Oracle gives you 4 ARM CPU cores, 24GB of RAM, and 200GB of storage, permanently and for free.

The tradeoff is that the setup is more involved than a one-click deploy. Here is how I did it.

Special thanks to this article, which I used as a reference.

Step 1: Create Your Oracle Cloud Account

Go to oracle.com/cloud/free and sign up. You will need to provide a credit card for identity verification, but you will not be charged as long as you stay within the Always Free resources.

During signup, you will choose a home region. This cannot be changed later, so pick a region close to you geographically. Note that popular regions sometimes have limited free-tier capacity, so if your first choice shows unavailability, try a nearby alternative.

When adding a payment method, make sure the card's billing address matches your account's home address. There have been reports of Oracle flagging accounts as suspicious when the addresses do not match.

Step 2: Create a Compartment

Compartments are how Oracle organizes resources. Rather than putting everything in the root compartment, create a dedicated one for your OpenClaw setup. I called mine Clawbot.

Navigate to Identity and Security, then Compartments, and create a new one. From here on, select this compartment whenever Oracle asks you to choose one. You should now see two compartments listed: the root compartment and your new one.

Step 3: Create the Network Before Anything Else

This is the step most people skip and then get stuck on. Create your Virtual Cloud Network (VCN) before you create your compute instance. If you try to set up networking during instance creation instead, the option to assign a public IP gets grayed out, and you will not be able to SSH into your server.

Go to Networking, then Overview, and find the option to create a VCN with internet connectivity.

Run the VCN wizard. Give it a name, select your compartment, and leave the IP range defaults if they are pre-filled. The wizard creates everything you need in one shot: a public subnet, internet gateway, routing rules, and SSH access.

Wait for the confirmation that your VCN is available before moving on.

Step 4: Create the Compute Instance

Now you can create the server itself. Go to Home > Instances and click Create Instance.

For the image, click Change Image and choose Canonical Ubuntu 24.04 Minimal aarch64.

Ubuntu has the best community documentation for this kind of setup, 24.04 is supported until 2029, and the minimal image keeps things lean and secure.

For the shape, Oracle should automatically select VM.Standard.A1.Flex under the Ampere (ARM) section once you change the image. If not, click Change shape and select it manually.

Look for the Always Free-eligible badge. Set the shape to 1 OCPU core and 6GB of RAM. That is 3x the recommended memory for OpenClaw and still leaves the rest of your free-tier allocation available for other projects.

Leave the Security section at its defaults.

For networking, select the VCN you just created and make sure you are using the public subnet. This is why creating the VCN first matters.

Under the IP address assignment settings, make sure "Automatically assign public IPv4 address" is toggled on. Without a public IP, you will not be able to SSH in.

Next, generate an SSH key pair. Download both the private and public keys and keep them somewhere safe. This is how you will connect to your server.
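Alternatively, if you would rather use your own key than Oracle's generated one, create an ed25519 pair locally and paste the public key into the console's paste-public-keys option:

```shell
# Generate a local ed25519 key pair (no passphrase here; add one if you prefer)
ssh-keygen -t ed25519 -f ~/.ssh/oracle-cloud.key -N "" -q

# Print the public key, then paste it into the instance creation form
cat ~/.ssh/oracle-cloud.key.pub
```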

Leave Storage at its default configuration. Click Create and wait for the instance to show a Running status.

If you cannot Create Instance

This is a common issue when creating an instance. Make sure your account is on the Pay As You Go tier: go to Billing → Upgrade and Manage Payment, then add your card to upgrade. You will not be charged anything as long as you stick to Free Tier products.

Step 5: Configure SSH Access

First, move your keys into your SSH directory:

mv ~/Downloads/ssh-key-*.key ~/.ssh/oracle-cloud.key
mv ~/Downloads/ssh-key-*.pub ~/.ssh/oracle-cloud.key.pub

# Set correct permissions (SSH requires this)
chmod 400 ~/.ssh/oracle-cloud.key

Once your instance is running, copy the public IP address from the instance details page. Connect from your terminal:

ssh -i your-private-key.key ubuntu@your-public-ip
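Optionally, save the connection details in ~/.ssh/config so you do not have to retype the flags every time. The host alias oracle-claw is just my choice; replace the placeholder IP with yours:

```
Host oracle-claw
    HostName <your-public-ip>
    User ubuntu
    IdentityFile ~/.ssh/oracle-cloud.key
```

After that, connecting is just `ssh oracle-claw`.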

If you get a connection refused error, go back to your VCN's security list and verify that port 22 is open for inbound traffic. Oracle's VCN wizard handles this by default, but it is worth checking.
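The ingress rule you are looking for should look something like this; if it is missing, add it to the default security list of your public subnet:

```
Source CIDR:       0.0.0.0/0
IP Protocol:       TCP
Destination Port:  22
```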

Step 6: Install OpenClaw

Once you are connected, update your packages first:

sudo apt update && sudo apt upgrade -y

Then install OpenClaw with the following command, per the official documentation at docs.openclaw.ai.

curl -fsSL https://openclaw.ai/install.sh | bash

The installation process walks you through setting up your workspace, connecting your first messaging channel, and configuring your AI model provider. Verify installation:

openclaw --version
node --version   # OpenClaw runs on Node.js, so this should also print a version

Step 7: Select Model Provider

Now here's the part you need to decide for yourself: the model provider. The model is your agent's brain, and the quality of the model you pick determines how much troubleshooting you save yourself later.

I chose GitHub Copilot as my model provider. It gives access to models from Anthropic, Google, OpenAI, and more, so I can experiment and switch models based on task complexity and use case.

Within about 20 minutes I had a working assistant I could message directly.

Step 8: Connect Telegram

This is where OpenClaw actually becomes usable from your phone: you can talk to it directly from Telegram or other chat apps. I went with Telegram.

You will need to create a Telegram bot and wire it into your OpenClaw config.

Create a bot with BotFather

Open Telegram and search for @BotFather. Make sure the handle is exactly that, as there are impersonators. Send /newbot, follow the prompts, and save the bot token it gives you. You will only see it once, so store it somewhere safe immediately.

While you are in BotFather, there are two settings worth configuring right away:

/setprivacy controls whether your bot can see all messages in a group or only messages that directly mention it. By default, Telegram enables Privacy Mode, which means the bot misses most group conversation. If you want it to be fully present in a group, you can disable privacy mode here. The other option, which I prefer, is to make the bot a group Admin instead, which also grants full message visibility without disabling privacy mode globally.

/setjoingroups controls whether anyone can add your bot to groups. Since this is a personal assistant, you probably want this set to disabled so strangers cannot add your bot to their groups.
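In short, the BotFather session comes down to three commands (BotFather walks you through each one interactively):

```
/newbot         # create the bot; save the token it returns
/setprivacy     # only if you want full group visibility without admin status
/setjoingroups  # set to Disable so strangers cannot add your bot to groups
```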

Configure OpenClaw

Open your OpenClaw config file and add the Telegram channel settings:

{
  channels: {
    telegram: {
      enabled: true,
      botToken: "YOUR_BOT_TOKEN_HERE",
      dmPolicy: "allowlist",
      allowFrom: ["YOUR_NUMERIC_TELEGRAM_USER_ID"],
      groups: {
        "*": { requireMention: true }
      },
    },
  },
}

A few things to understand here before you just copy and paste this.

dmPolicy is the access control setting for direct messages. The default is pairing, which means anyone who finds your bot can send it a pairing request and get access after you approve it. For a personal assistant that has access to your files, calendar, and potentially financial data, allowlist is the safer choice. It means only numeric Telegram user IDs listed in allowFrom can message the bot at all.

allowFrom takes numeric user IDs, not usernames. Usernames can change. Numeric IDs cannot. To find your numeric ID, you can message a bot like @userinfobot on Telegram and it will return your ID. Put that number in the list.
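Alternatively, you can pull the ID straight from the Telegram Bot API: message your bot once, then query getUpdates. Do this before starting the gateway, since only one consumer can poll getUpdates at a time. This sketch assumes jq is installed and $BOT_TOKEN holds the token from BotFather:

```shell
# List the unique numeric user IDs of everyone who has messaged the bot recently
curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getUpdates" \
  | jq -c '[.result[].message.from.id] | unique'
```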

If you set dmPolicy to open with allowFrom: ["*"], anyone who finds or guesses your bot username can command it. For a personal bot with access to your data and tools, this is a significant risk. Stick to allowlist with explicit IDs.

requireMention: true under the groups config means the bot will only respond in group chats when it is directly @mentioned. This prevents it from responding to every single message in a group, which gets noisy quickly and also leaks private context into a shared space unnecessarily.

Add the bot to a group (optional)

If you want to use your assistant inside a Telegram group, add the bot the same way you would add any contact. Once added, go into the group settings and promote the bot to Admin. This is what gives it full message visibility. Without admin status and with privacy mode still on, the bot can only see messages that explicitly mention it.

After changing privacy mode in BotFather or after adding the bot as admin, remove the bot from the group and re-add it. Telegram only applies permission changes when the bot rejoins.

Approve the first DM pairing

Start the gateway:

openclaw gateway

Then send your bot a message from Telegram. Even on allowlist mode, the first time you message it a pairing code gets generated. Approve it from your server:

openclaw pairing list telegram
openclaw pairing approve telegram <CODE>

Pairing codes expire after one hour. Once approved, your numeric ID is registered and you have full access.
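Running `openclaw gateway` in your SSH session means it dies when you disconnect. To keep it running around the clock, a systemd unit is the usual approach on Ubuntu. This is a sketch, not the official setup: the ExecStart path and unit name are my assumptions, so check `which openclaw` on your server and adjust:

```ini
# /etc/systemd/system/openclaw.service (path and name are my choice)
[Unit]
Description=OpenClaw gateway
After=network-online.target

[Service]
User=ubuntu
ExecStart=/usr/local/bin/openclaw gateway
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it with `sudo systemctl enable --now openclaw.service` and check on it with `systemctl status openclaw`.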

A note on security

Your OpenClaw instance has access to whatever you give it: your files, your workspace, your credentials for other services. The access control config is not a nice-to-have. It is the boundary between a useful personal tool and an exposed server anyone could interact with. Use allowlist, use numeric IDs, and keep requireMention on in groups. If you ever share your bot token accidentally, regenerate it immediately in BotFather with /revoke.

What I Learned From Setting This Up

The biggest insight is that OpenClaw is not just a chatbot with more features. The architecture is genuinely different. Because it runs persistently, has structured memory, and operates through a workspace of files you control, it behaves more like infrastructure than an app.

The Oracle Cloud setup is worth the extra configuration steps. Running your assistant on a server you do not pay for, with resources that do not expire, changes how you think about what you can automate and offload.

If you hit any errors while setting up Oracle, check this article for solutions. It took me a while to troubleshoot, so it was a real time saver!

The model really is your agent's brain! I learned that the hard way by being cheap haha.

A lot of the issues you will hit when setting up workflows and tools come down to how "smart" your agent is. Choose a bad model and it will write garbage to your files, pretend it has completed your tasks, call the wrong tools for the wrong jobs... and cause so many more headaches.

Conclusion

OpenClaw gave me something I did not know I was missing: a persistent, capable assistant that knows my context without me having to re-explain it every session. Hosting it on Oracle Cloud's free tier means it runs around the clock without any ongoing cost. The setup takes an afternoon, but what you get at the end is an assistant that actually grows with you.

If you are someone who has thought about building a personal AI setup but felt like the existing tools were either too limited or too expensive to run, this is worth looking at seriously.

Stay tuned for Part 2, where I will be connecting the gog skill to wire up Gmail, Google Calendar, and Google Drive, turning the assistant from a general helper into something that actually works with my day-to-day workflows.

Thanks for reading! I am curious to know your own personal thoughts and experiences on this topic! Feel free to connect, send me an email (my inbox is always open) or let me know in the comments! Cheers!
