Start an Inference Job

Once you’ve created an app, you can run asynchronous jobs on it by sending a POST request to:

POST /app/<app-id>/run_async

where <app-id> is the id returned when you created your app. This endpoint submits the job and returns immediately; it does not wait for the job to complete. Instead, the response contains a job id you can use later to check the status and retrieve the output.

Required Headers

  • x-api-key: A valid API key that authenticates the user.
  • x-team-id: (Optional) The ID of the team associated with the request.
  • Content-Type: application/json

Request Body

  • input (object, required): The job payload; its fields depend on the model your app runs. For FLUX, for example, you might include:
    • prompt (string): Text prompt or instructions.
    • height (number): Desired image height (if the model generates images).
    • width (number): Desired image width.

Below are example commands for running a job asynchronously in Bash and Python:

curl --location 'https://api.rungen.ai/app/<YOUR-APP-ID>/run_async' \
--header 'x-api-key: <YOUR-API-KEY>' \
--header 'x-team-id: <YOUR-TEAM-ID>' \
--header 'Content-Type: application/json' \
--data '{
  "input": {
    "prompt": "Milan Duomo during a rainy night, a couple walking hand in hand under their umbrella",
    "height": 1024,
    "width": 1024
  }
}'
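
The same request can be sent from Python. The following is a minimal sketch using the requests library; the endpoint, headers, and body mirror the curl example above, and the placeholder values are ones you would replace with your own:

import requests

# Replace these placeholders with your own values.
APP_ID = "<YOUR-APP-ID>"
API_KEY = "<YOUR-API-KEY>"
TEAM_ID = "<YOUR-TEAM-ID>"  # optional; omit this header if you have no team

url = f"https://api.rungen.ai/app/{APP_ID}/run_async"
headers = {
    "x-api-key": API_KEY,
    "x-team-id": TEAM_ID,
    "Content-Type": "application/json",
}
payload = {
    "input": {
        "prompt": "Milan Duomo during a rainy night, a couple walking hand in hand under their umbrella",
        "height": 1024,
        "width": 1024,
    }
}

# Submit the asynchronous job and print the raw response body.
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())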

The response contains a unique job id:

{
  "data": {
    "id": "e9257302-3c4d-47d0-bf90-5be1e4e3b6e2"
  }
}

Save this id; in the next chapter we will use it to retrieve the status of the job and download the output.
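
Continuing the Python sketch above, the id can be pulled out of the response body and kept for later:

# The response body has the shape {"data": {"id": "<job-id>"}}.
job_id = response.json()["data"]["id"]
print(f"Queued job: {job_id}")  # keep this id to poll the job status later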