Frequently Asked Questions
We have a Getting Started guide that will help you get up and running with the 01.
We have a Connecting guide that will walk you through connecting your device to the 01 server.
We are working on supporting this, but only server-side code execution is supported right now.
We recommend running `--profiles`, duplicating a profile, then experimenting with the settings in the profile file (like `system_message`).
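As a concrete sketch, a duplicated profile might adjust fields like `system_message` and the model. The snippet below is a self-contained mock, not the real profile format: `SimpleNamespace` stands in for the 01's interpreter object, and the attribute names beyond `system_message` are assumptions for illustration.

```python
# Hypothetical sketch of a duplicated profile file. SimpleNamespace stands in
# for the real interpreter object so this runs on its own; setting names other
# than system_message are illustrative assumptions.
from types import SimpleNamespace

interpreter = SimpleNamespace(llm=SimpleNamespace())

# Settings you might experiment with after duplicating a profile:
interpreter.system_message = "You are the 01, a concise voice assistant."
interpreter.llm.model = "gpt-4-turbo"   # the documented default model
interpreter.llm.temperature = 0.3       # illustrative value

print(interpreter.llm.model)  # → gpt-4-turbo
```

In the real repo, a profile is just a Python file the server loads, so any attribute it exposes can be changed this way.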
The server runs on your home computer, or whichever device you want to control.
You might need to re-install the Poetry environment. In the `software` directory, please run `poetry env remove --all` followed by `poetry install`.
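The reinstall sequence above, as a shell session (assuming you start from the repository root):

```shell
# Rebuild the Poetry environment from scratch.
cd software
poetry env remove --all   # delete the existing virtualenv(s)
poetry install            # recreate the environment from the lock file
```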
We are working on supporting external devices in the desktop app, but for now the 01 will need to connect to the Python server.
We are working on building this feature, but it isn't available yet.
We support `--tunnel-service bore` and `--tunnel-service localtunnel` in addition to `--tunnel-service ngrok`. [link to tunnel service docs]
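For example, the tunnel service can be selected on the command line. These invocations assume `--tunnel-service` is combined with `--expose` (the flag mentioned later on this page for exposing the server):

```shell
# ngrok is the default tunnel service:
poetry run 01 --expose --tunnel-service ngrok

# Alternatives:
poetry run 01 --expose --tunnel-service bore
poetry run 01 --expose --tunnel-service localtunnel
```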
If you use `--profile local`, you won't need to use an LLM via an API. The 01 server will be responsible for running the LLM, and you can run the server + client on the same device (simply run `poetry run 01` to test this).
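Putting those two commands together (both flags come from this page; exact behavior may vary by version):

```shell
# Run with a local model, no API key required:
poetry run 01 --profile local

# Run server + client together on one device with the defaults:
poetry run 01
```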
We have found `gpt-4-turbo` to be the best, but we expect Claude 3.5 Sonnet to be comparable or better.
If you use `--profile local`, you don't need to. For hosted language models, you may need to pay a monthly subscription.
The computer does need to be running, and will not wake up if a request is sent while it’s sleeping.
The 01 defaults to `gpt-4-turbo`.
We are exploring a few options for how best to provide a stand-alone device connected to a virtual computer in the cloud, provided by Open Interpreter. There will be an announcement once we have figured out the right way to do it. The idea is that it functions with the same capabilities as the demo, just controlling a computer in the cloud rather than the one on your desk at home.
We are figuring out the best way to activate the community to build the next phase. For now, you can read over the Repository https://github.com/OpenInterpreter/01 and join the Discord https://discord.gg/Hvz9Axh84z to find and discuss ways to start contributing to the open-source 01 Project!
The official app is being developed, and you can find instructions for how to set it up and contribute to development here: https://github.com/OpenInterpreter/01/tree/main/software/source/clients/mobile Please also join the Discord https://discord.gg/Hvz9Axh84z to find and discuss ways to start contributing to the open-source 01 Project!
If you use ngrok when running `--expose` on the 01 repo, that is the only third party involved. You can also use bore and localtunnel; these services are used to expose your 01 server over the internet so your device can connect to it. If you run it locally and connect to the server via something like 0.0.0.0, no third parties are involved.
This depends on which `api_base` / LLM service provider you use, but regarding the recommended GPT models: "OpenAI encrypts all data at rest (AES-256) and in transit (TLS 1.2+)." This will differ for Anthropic, Ollama, etc., but we would expect all large providers to meet similar encryption standards.