The answer is yes, but it’s not a trivial task. I ran into this question while building a Telegram bot with the python-telegram-bot (PTB) framework, while also wanting to run a FastAPI server with uvicorn. PTB is built on top of asyncio, and calling Application.run_polling() blocks the event loop, so I had to find a way to let both run without blocking each other.

Option 1: Embed the other asyncio frameworks in one event loop

This is actually the way recommended by the PTB documentation for running other asyncio frameworks (a web server, another bot, etc.) alongside the bot. The example given looks like this:

application = ApplicationBuilder().token("TOKEN").build()

async def main():
    await application.initialize()
    await application.start()
    await application.updater.start_polling()  # or start_webhook(...)
    # Start other asyncio frameworks here
    # Add some logic that keeps the event loop running until you want to shutdown
    # Stop the other asyncio frameworks here
    await application.updater.stop()
    await application.stop()
    await application.shutdown()

More intuitively, you can use the application as an async context manager:

application = ApplicationBuilder().token("TOKEN").build()

async def main():
    async with application:  # Calls `initialize` and `shutdown`
        await application.start()
        await application.updater.start_polling()  # or start_webhook(...)
        # Start other asyncio frameworks here
        # Add some logic that keeps the event loop running until you want to shutdown
        # Stop the other asyncio frameworks here
        await application.updater.stop()
        await application.stop()

The context manager guarantees that the application is properly initialized and shut down, even if an exception is raised in between.
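The two placeholder comments above (“keep the event loop running”, “stop the other frameworks”) can be made concrete with nothing but the stdlib: wait on an asyncio.Event that a shutdown hook sets. Here is a minimal sketch where two dummy coroutines stand in for the PTB application and the web server:

```python
import asyncio

async def fake_bot(stop: asyncio.Event) -> None:
    # Stand-in for the running PTB application
    await stop.wait()

async def fake_webserver(stop: asyncio.Event) -> None:
    # Stand-in for e.g. uvicorn's server.serve()
    await stop.wait()

async def main() -> str:
    stop = asyncio.Event()
    tasks = [
        asyncio.create_task(fake_bot(stop)),
        asyncio.create_task(fake_webserver(stop)),
    ]
    # In a real app a signal handler would call stop.set(), e.g.
    # loop.add_signal_handler(signal.SIGTERM, stop.set).
    # Here we trigger shutdown after a short delay for demonstration.
    asyncio.get_running_loop().call_later(0.05, stop.set)
    await stop.wait()             # keeps the event loop running
    await asyncio.gather(*tasks)  # both "frameworks" wind down together
    return "clean shutdown"

result = asyncio.run(main())
```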

Option 2: Run the other asyncio frameworks in separate threads

Alternatively, we can run each framework in its own thread, giving each thread its own event loop:

import asyncio
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

from telegram.ext import ApplicationBuilder

def run_bot(token):
    bot = ApplicationBuilder().token(token).build()
    # Each thread needs its own event loop. run_polling() is blocking and
    # manages the loop itself, so don't wrap it in asyncio.run().
    # stop_signals=None is required here because signal handlers can only
    # be installed in the main thread.
    asyncio.set_event_loop(asyncio.new_event_loop())
    bot.run_polling(stop_signals=None)

# Using ThreadPoolExecutor to manage the threads
with ThreadPoolExecutor(max_workers=2) as executor:
    # Submit both bots to the thread pool
    future1 = executor.submit(run_bot, "BOT1_TOKEN")
    future2 = executor.submit(run_bot, "BOT2_TOKEN")

    # Wait for both futures to complete
    concurrent.futures.wait([future1, future2])

However, this is problematic for two reasons:

  1. Objects shared between the threads (each with its own event loop) are prone to race conditions unless you only touch them through thread-safe primitives.
  2. All threads still share the GIL, so CPU-bound work can’t be fully parallelized.
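On the first point: if the main thread needs to hand work to a loop running in another thread, the safe primitive is asyncio.run_coroutine_threadsafe rather than touching the other loop’s objects directly. A stdlib-only sketch (send_message is a hypothetical stand-in for a bot call):

```python
import asyncio
import threading

loop = asyncio.new_event_loop()

def loop_worker():
    # Give the background thread its own event loop and run it forever
    asyncio.set_event_loop(loop)
    loop.run_forever()

thread = threading.Thread(target=loop_worker, daemon=True)
thread.start()

async def send_message(text: str) -> str:
    # Stand-in for e.g. bot.send_message(...)
    await asyncio.sleep(0.01)
    return f"sent: {text}"

# From the main thread, schedule the coroutine on the background loop
future = asyncio.run_coroutine_threadsafe(send_message("hello"), loop)
result = future.result(timeout=5)  # blocks the calling thread, not the loop

loop.call_soon_threadsafe(loop.stop)
thread.join()
```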

Option 3: Run the other asyncio frameworks in separate processes

Each process has its own memory space, so it’s safe to run multiple asyncio frameworks in separate processes. An example is as follows:

import asyncio
from multiprocessing import Process
from telegram.ext import ApplicationBuilder

def run_bot1():
    bot1 = ApplicationBuilder().token("BOT1_TOKEN").build()
    # run_polling() is blocking and manages its own event loop,
    # so there is no need to wrap it in asyncio.run()
    bot1.run_polling()

def run_bot2():
    bot2 = ApplicationBuilder().token("BOT2_TOKEN").build()
    bot2.run_polling()

if __name__ == '__main__':
    # Create separate processes for each bot
    process1 = Process(target=run_bot1)
    process2 = Process(target=run_bot2)

    # Start both processes
    process1.start()
    process2.start()

    # Wait for both processes to complete
    process1.join()
    process2.join()

Compared to option 2, we no longer have race conditions and the bots are fully parallelized, but running several processes may require significantly more resources, and communicating between processes is not trivial.

How to choose the best option?

Let’s step back and think about why we want to use asyncio frameworks in the first place. Asyncio is a concurrency model that lets us run multiple tasks concurrently without blocking the main thread. It is mostly useful for IO-bound tasks, such as network requests, database operations, and file I/O. However, if we run CPU-bound tasks on asyncio, they will block the event loop, even if we spread them across separate asyncio frameworks (it’s still one event loop).
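To make that concrete: when a CPU-bound step creeps into an otherwise IO-bound service, the event loop can stay responsive by offloading the work with loop.run_in_executor. A small sketch (passing None uses the default ThreadPoolExecutor; pass a ProcessPoolExecutor instead for true parallelism around the GIL):

```python
import asyncio

def cpu_heavy(n: int) -> int:
    # Stand-in for image processing: pure CPU work
    return sum(i * i for i in range(n))

async def main() -> int:
    loop = asyncio.get_running_loop()
    # While this runs in the executor, the event loop is free
    # to keep serving other tasks (bot updates, HTTP requests, ...)
    return await loop.run_in_executor(None, cpu_heavy, 100_000)

result = asyncio.run(main())
```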

So let’s look at two scenarios:

  1. We have one bot that mostly calls APIs, plus a web server, and we want to run them concurrently.
  2. We have one bot that runs complex computation locally (e.g. image processing), one bot that calls an LLM API, and a FastAPI server that talks to the database and serves the frontend.

For the first scenario we can use option 1 and run the bot plus an asyncio web server in one event loop, as both are IO-bound. For the second scenario we can combine option 1 and option 3: run the API bot and the FastAPI server in one event loop, and run the image-processing bot in a separate process. That gives us option 4:

Option 4: Hybrid approach

import asyncio
from multiprocessing import Process
from fastapi import FastAPI
from telegram.ext import ApplicationBuilder
import uvicorn

# FastAPI app setup
app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello World"}

# Telegram bot that makes API calls
async def run_api_bot(token):
    bot = ApplicationBuilder().token(token).build()
    async with bot:
        await bot.start()
        await bot.updater.start_polling()
        # Keep the bot running
        await asyncio.Event().wait()

# CPU-intensive bot running in a separate process
def run_image_processing_bot():
    bot = ApplicationBuilder().token("IMAGE_BOT_TOKEN").build()
    # run_polling() is blocking and manages its own event loop
    bot.run_polling()

# Main function running FastAPI and API bot in same event loop
async def main():
    # Start the API bot
    api_bot_task = asyncio.create_task(run_api_bot("API_BOT_TOKEN"))
    
    # Start FastAPI
    config = uvicorn.Config(app, host="0.0.0.0", port=8000, loop="asyncio")
    server = uvicorn.Server(config)
    await server.serve()
    
    # The bot task waits on an Event that is never set, so cancel it
    # once the server has shut down
    api_bot_task.cancel()
    try:
        await api_bot_task
    except asyncio.CancelledError:
        pass

if __name__ == '__main__':
    # Start the CPU-intensive bot in a separate process
    image_bot_process = Process(target=run_image_processing_bot)
    image_bot_process.start()
    
    # Run the main async function with FastAPI and API bot
    asyncio.run(main())
    
    # Wait for the image processing bot to complete
    image_bot_process.join()

The hybrid approach shows how to properly initialize and run all components while maintaining clean separation between CPU-bound and IO-bound tasks.

Conclusion

The point of this article is that there is no single best option. Asyncio is in most cases the best choice for concurrency, but not always: it has some critical limitations, such as only using one core, so it can’t be fully parallelized. And if we reach for multiple processes, we should ask ourselves why we designed the system this way in the first place; would it be better to have a separate server/pod for the CPU-bound tasks? As with most other software engineering problems, it’s an art of trade-offs.