diff --git a/README_EN.md b/README_EN.md
new file mode 100644
index 0000000..832a61a
--- /dev/null
+++ b/README_EN.md
@@ -0,0 +1,123 @@
+# autoMate
+
+An Open Source Development Platform for Agent+RPA.
+
+[![][issues-helper-image]][issues-helper-url] [![Issues need help][help-wanted-image]][help-wanted-url]
+
+
+[Documentation](https://s0soyusc93k.feishu.cn/wiki/JhhIwAUXJiBHG9kmt3YcXisWnec?from=from_copylink) | [Introduction Video](https://www.bilibili.com/video/BV1LW421R7Ai/?share_source=copy_web&vd_source=c28e503b050f016c21660b69e391d391) | [QQ Channel](https://pd.qq.com/s/1ygylejjb)
+
+
+
+[issues-helper-image]: https://img.shields.io/badge/using-actions--cool-blue?style=flat-square
+[issues-helper-url]: https://github.com/actions-cool
+[help-wanted-image]: https://flat.badgen.net/github/label-issues/yuruotong1/autoMate/enhancement/open
+[help-wanted-url]: https://github.com/yuruotong1/autoMate/labels/enhancement
+
+
+
+
+
+## Features
+
+- Generate automation code by chatting.
+- Run automation code with one click from quick search.
+- A comprehensive automation toolkit.
+- An integrated framework and tools for automation development.
+- Compatible with all online and local LLMs.
+
+## Environment
+
+- An OpenAI-compatible LLM API is required.
+- Refer to the LiteLLM provider matrix below for details:
+
+| LLMs | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) | [Async Embedding](https://docs.litellm.ai/docs/embedding/supported_embedding) | [Async Image Generation](https://docs.litellm.ai/docs/image_generation) |
+|-------------------------------------------------------------------------------------|---------------------------------------------------------|---------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------|-------------------------------------------------------------------------|
+| [openai](https://docs.litellm.ai/docs/providers/openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [azure](https://docs.litellm.ai/docs/providers/azure) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [aws - sagemaker](https://docs.litellm.ai/docs/providers/aws_sagemaker) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [aws - bedrock](https://docs.litellm.ai/docs/providers/bedrock) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [google - vertex_ai](https://docs.litellm.ai/docs/providers/vertex) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [google - palm](https://docs.litellm.ai/docs/providers/palm) | ✅ | ✅ | ✅ | ✅ | | |
+| [google AI Studio - gemini](https://docs.litellm.ai/docs/providers/gemini) | ✅ | ✅ | ✅ | ✅ | | |
+| [mistral ai api](https://docs.litellm.ai/docs/providers/mistral) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [cloudflare AI Workers](https://docs.litellm.ai/docs/providers/cloudflare_workers) | ✅ | ✅ | ✅ | ✅ | | |
+| [cohere](https://docs.litellm.ai/docs/providers/cohere) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [anthropic](https://docs.litellm.ai/docs/providers/anthropic) | ✅ | ✅ | ✅ | ✅ | | |
+| [huggingface](https://docs.litellm.ai/docs/providers/huggingface) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [replicate](https://docs.litellm.ai/docs/providers/replicate) | ✅ | ✅ | ✅ | ✅ | | |
+| [together_ai](https://docs.litellm.ai/docs/providers/togetherai) | ✅ | ✅ | ✅ | ✅ | | |
+| [openrouter](https://docs.litellm.ai/docs/providers/openrouter) | ✅ | ✅ | ✅ | ✅ | | |
+| [ai21](https://docs.litellm.ai/docs/providers/ai21) | ✅ | ✅ | ✅ | ✅ | | |
+| [baseten](https://docs.litellm.ai/docs/providers/baseten) | ✅ | ✅ | ✅ | ✅ | | |
+| [vllm](https://docs.litellm.ai/docs/providers/vllm) | ✅ | ✅ | ✅ | ✅ | | |
+| [nlp_cloud](https://docs.litellm.ai/docs/providers/nlp_cloud) | ✅ | ✅ | ✅ | ✅ | | |
+| [aleph alpha](https://docs.litellm.ai/docs/providers/aleph_alpha) | ✅ | ✅ | ✅ | ✅ | | |
+| [petals](https://docs.litellm.ai/docs/providers/petals) | ✅ | ✅ | ✅ | ✅ | | |
+| [ollama](https://docs.litellm.ai/docs/providers/ollama) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [deepinfra](https://docs.litellm.ai/docs/providers/deepinfra) | ✅ | ✅ | ✅ | ✅ | | |
+| [perplexity-ai](https://docs.litellm.ai/docs/providers/perplexity) | ✅ | ✅ | ✅ | ✅ | | |
+| [Groq AI](https://docs.litellm.ai/docs/providers/groq) | ✅ | ✅ | ✅ | ✅ | | |
+| [Deepseek](https://docs.litellm.ai/docs/providers/deepseek) | ✅ | ✅ | ✅ | ✅ | | |
+| [anyscale](https://docs.litellm.ai/docs/providers/anyscale) | ✅ | ✅ | ✅ | ✅ | | |
+| [IBM - watsonx.ai](https://docs.litellm.ai/docs/providers/watsonx) | ✅ | ✅ | ✅ | ✅ | ✅ | |
+| [voyage ai](https://docs.litellm.ai/docs/providers/voyage) | | | | | ✅ | |
+| [xinference [Xorbits Inference]](https://docs.litellm.ai/docs/providers/xinference) | | | | | ✅ | |
+| [FriendliAI](https://docs.litellm.ai/docs/providers/friendliai) | ✅ | ✅ | ✅ | ✅ | | |
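Whichever provider you configure, the common denominator is the OpenAI-style chat-completion request shape. A minimal sketch of such a request body (the model name and prompts are placeholders, not autoMate defaults):

```python
import json

# Sketch of an OpenAI-compatible chat-completion request body.
# Model name and message contents below are illustrative placeholders.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You generate desktop automation code."},
        {"role": "user", "content": "Open the calculator app."},
    ],
    "stream": False,
}
print(json.dumps(payload, indent=2))
```

Any provider in the table above that LiteLLM exposes through this request shape should work the same way from autoMate's point of view.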
+
+
+## Related Links
+
+
+- [Basic Features](https://s0soyusc93k.feishu.cn/wiki/JhhIwAUXJiBHG9kmt3YcXisWnec#O9W8dEqfBo13oQxCslycFUWonFd)
+
+- [Project Overview](https://s0soyusc93k.feishu.cn/wiki/SR9ywLMZmin7gakGo21cnyaFnRf?from=from_copylink)
+
+## Quick Start
+
+Download the latest version from the Releases page and double-click it to run; no dependencies are required.
+
+## Local LLM Application Development
+
+This project has two parts: the front-end in the `app` directory and the back-end in the `server` directory, so running autoMate means starting both at the same time. On first run the project creates an SQLite database, `autoMate.db`, in your home directory; to browse its contents we recommend the open-source database tool `DBeaver`.
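If you prefer not to install a GUI tool, Python's built-in `sqlite3` module can inspect the database directly. A read-only sketch that lists the tables (their names depend on what autoMate has created so far):

```python
import os
import sqlite3

# Open autoMate.db from the home directory and list its tables.
# Note: sqlite3.connect creates an empty file if the database does not exist yet.
db_path = os.path.join(os.path.expanduser("~"), "autoMate.db")
conn = sqlite3.connect(db_path)
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
)]
print(tables)
conn.close()
```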
+
+### Start the Front-End
+
+1. Install Node.js (version v18.x is required).
+2. Navigate to the `app` directory in a terminal.
+3. Run `npm install` to install dependencies.
+4. Run `npm run dev` to start the front-end.
+
+### Start the Back-End
+
+1. Install Python 3 (version 3.9+ recommended).
+2. Navigate to the `server` directory in a terminal.
+3. Create a virtual environment with `python -m venv .venv`, then activate it (`.venv\Scripts\activate` on Windows, `source .venv/bin/activate` on macOS/Linux).
+4. Run `pip install -r requirements.txt` to install the required dependencies.
+5. Run `flask --app main run` to start the back-end.
+
+### Packaging
+
+Back-end packaging command:
+
+`pyinstaller main.spec`
+
+Front-end packaging command:
+
+`npm run build:win`
+
+After packaging, place `main.exe` in the front-end root directory.
+
+## Contributing
+
+Please refer to [Contribution Guidance](https://s0soyusc93k.feishu.cn/wiki/ZE7KwtRweicLbNkHSdMcBMTxngg?from=from_copylink).
+
+> We highly recommend reading [How to Ask Questions the Smart Way](https://github.com/ryanhanwu/How-To-Ask-Questions-The-Smart-Way), [How to Ask Questions to an Open Source Community](https://github.com/seajs/seajs/issues/545), [How to Report Bugs Effectively](http://www.chiark.greenend.org.uk/%7Esgtatham/bugs-cn.html), and [How to Submit a Good Issue to an Open Source Project](https://zhuanlan.zhihu.com/p/25795393). Better questions are more likely to get help.
+
+
+
+
\ No newline at end of file