Xano Actions
Cursor → 1-Click MCP Install Link
Overview
A helper function that creates a 1-click MCP install link for Cursor, using URL-based authentication with Xano MCP servers. It generates a deeplink plus ready-to-use markup (Markdown, HTML, JSX) so users can install the MCP server in Cursor with a single click.
Input Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| mcp_server_url | string | Yes | The full URL endpoint of the Xano MCP server |
| server_name | string | Yes | Display name for the MCP server |
| server_type | enum | Yes | Type of MCP server connection (e.g., sse) |
Example Input
`json
{
"mcpserverurl": "https://x123-wu0q-dtak.n7.xano.io/x2/mcp/6vi_VA6-/mcp/sse",
"server_name": "Xano MCP Server Name",
"server_type": "sse"
}
`
Example Output
`json
{
"deeplink": "cursor://anysphere.cursor-deeplink/mcp/install?name=Xano MCP Server Name&config=eyJ0eXBlIjoic3NlIiwidXJsIjoiaHR0cHM6Ly94MTIzLXd1MHEtZHRhay5uNy54YW5vLmlvL3gyL21jcC82dmlfVkE2LS9tY3Avc3NlIn0=",
"markdown": {
"dark": "",
"light": ""
},
"html": {
"dark": "<a href=\"cursor://anysphere.cursor-deeplink/mcp/install?name=Xano MCP Server Name&config=eyJ0eXBlIjoic3NlIiwidXJsIjoiaHR0cHM6Ly94MTIzLXd1MHEtZHRhay5uNy54YW5vLmlvL3gyL21jcC82dmlfVkE2LS9tY3Avc3NlIn0=\"><img src=\"https://cursor.com/deeplink/mcp-install-dark.svg\" alt=\"Add Xano MCP Server Name MCP server to Cursor\" height=\"32\" /></a>",
"light": "<a href=\"cursor://anysphere.cursor-deeplink/mcp/install?name=Xano MCP Server Name&config=eyJ0eXBlIjoic3NlIiwidXJsIjoiaHR0cHM6Ly94MTIzLXd1MHEtZHRhay5uNy54YW5vLmlvL3gyL21jcC82dmlfVkE2LS9tY3Avc3NlIn0=\"><img src=\"https://cursor.com/deeplink/mcp-install-light.svg\" alt=\"Add Xano MCP Server Name MCP server to Cursor\" height=\"32\" /></a>"
},
"jsx": {
"dark": "<a href=\"cursor://anysphere.cursor-deeplink/mcp/install?name=Xano MCP Server Name&config=eyJ0eXBlIjoic3NlIiwidXJsIjoiaHR0cHM6Ly94MTIzLXd1MHEtZHRhay5uNy54YW5vLmlvL3gyL21jcC82dmlfVkE2LS9tY3Avc3NlIn0=\"><img src=\"https://cursor.com/deeplink/mcp-install-dark.svg\" alt=\"Add Xano MCP Server Name MCP server to Cursor\" height=\"32\" /></a>",
"light": "<a href=\"cursor://anysphere.cursor-deeplink/mcp/install?name=Xano MCP Server Name&config=eyJ0eXBlIjoic3NlIiwidXJsIjoiaHR0cHM6Ly94MTIzLXd1MHEtZHRhay5uNy54YW5vLmlvL3gyL21jcC82dmlfVkE2LS9tY3Avc3NlIn0=\"><img src=\"https://cursor.com/deeplink/mcp-install-light.svg\" alt=\"Add Xano MCP Server Name MCP server to Cursor\" height=\"32\" /></a>"
}
}
`
Output Fields
deeplink
The raw Cursor deeplink URL that can be used programmatically or shared directly.
markdown
Ready-to-use Markdown install buttons with Cursor's official badge images:
dark: Dark theme install button
light: Light theme install button
html
HTML anchor tags with embedded install buttons:
dark: Dark theme HTML button
light: Light theme HTML button
jsx
JSX-compatible HTML for React components:
dark: Dark theme JSX button
light: Light theme JSX button
Usage Notes
The config parameter in the deeplink contains a base64-encoded JSON configuration
All markup formats include proper URL encoding for compatibility
The function supports both dark and light theme variants for different UI contexts
Install buttons use Cursor's official badge images hosted at cursor.com/deeplink/
Implementation Details
The function creates a Cursor deeplink following the format:
`
cursor://anysphere.cursor-deeplink/mcp/install?name={SERVER_NAME}&config={BASE64_CONFIG}
`
Where {BASE64_CONFIG} is a base64-encoded JSON object containing:
`json
{
"type": "{server_type}",
"url": "{mcpserverurl}"
}
`
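For reference, here is a minimal TypeScript sketch of the same encoding steps, assuming a Node.js runtime; the helper name is illustrative and not part of the action. Note that this sketch URL-encodes the server name, whereas the example output above leaves spaces unencoded.
`typescript
// Build a Cursor 1-click MCP install deeplink from a server name, URL, and type (illustrative sketch).
function buildCursorDeeplink(serverName: string, mcpServerUrl: string, serverType: string): string {
  // config is a base64-encoded JSON object: { "type": ..., "url": ... }
  const config = Buffer.from(JSON.stringify({ type: serverType, url: mcpServerUrl })).toString("base64");
  // URL-encode the display name so it is safe to use as a query parameter.
  const name = encodeURIComponent(serverName);
  return `cursor://anysphere.cursor-deeplink/mcp/install?name=${name}&config=${config}`;
}

const deeplink = buildCursorDeeplink(
  "Xano MCP Server Name",
  "https://x123-wu0q-dtak.n7.xano.io/x2/mcp/6vi_VA6-/mcp/sse",
  "sse"
);
`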
Exa → Search
Overview
This action integrates with the Exa Search API to run smart web searches with optional content extraction (full text, highlights, summaries) and freshness controls. Exa can choose between keyword and neural (embeddings) search, or you can force a specific type, including fast for lower latency. You can also focus on categories (e.g., research papers, news), restrict to or exclude domains, and filter by crawl or publish dates. ([docs.exa.ai][1])
Inputs
| Name | Type | Required | Default | Description |
| -------------------- | --------------------------------------------- | -------: | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| query | text | Yes | – | The query string to search for. ([docs.exa.ai][1]) |
| type | enum (keyword / neural / fast / auto) | No | auto | Search strategy: keyword (SERP-like), neural (semantic), fast (streamlined neural+keyword), or auto (automatic choice). ([docs.exa.ai][1]) |
| category | enum | No | – | Focus results by data type: company, research paper, news, pdf, github, tweet, personal site, linkedin profile, financial report. ([docs.exa.ai][1]) |
| userLocation | text | No | – | Two-letter ISO country code (e.g., US). ([docs.exa.ai][1]) |
| numResults | int (1–100) | No | 10 | Number of results to return. Limits vary by type: keyword up to 10; neural up to 100. ([docs.exa.ai][1]) |
| includeDomains | text[] | No | – | Only return results from these domains. ([docs.exa.ai][1]) |
| excludeDomains | text[] | No | – | Exclude results from these domains. ([docs.exa.ai][1]) |
| startCrawlDate | text (ISO 8601) | No | – | Only include links crawled after this date. ([docs.exa.ai][1]) |
| endCrawlDate | text (ISO 8601) | No | – | Only include links crawled before this date. ([docs.exa.ai][1]) |
| startPublishedDate | text (ISO 8601) | No | – | Only include links published after this date. ([docs.exa.ai][1]) |
| endPublishedDate | text (ISO 8601) | No | – | Only include links published before this date. ([docs.exa.ai][1]) |
| includeText | text[] | No | – | Strings that must appear in page text (currently 1 string, ≤5 words). ([docs.exa.ai][1]) |
| excludeText | text[] | No | – | Strings that must not appear in page text (currently 1 string, ≤5 words; checks first ~1000 words). ([docs.exa.ai][1]) |
| context | bool or object | No | – | If true, returns an LLM-ready context string; object may include maxCharacters. ([docs.exa.ai][1]) |
| moderation | bool | No | false | Enable safety filtering of search results. ([docs.exa.ai][1]) |
| contents | JSON object | No | – | Controls content extraction (e.g., text, highlights, summary, livecrawl, subpages, etc.). See Exa “Get contents” & livecrawl docs. ([docs.exa.ai][2]) |
| exa_key | registry (text) | Yes | – | Your Exa API key |
Note on “Fast” search: Exa Fast targets very low latency (p50 < ~425 ms) with streamlined neural+keyword. Use type="fast" when speed is critical. ([docs.exa.ai][3])
Function Stack
Build Parameters
Uses set_ifnotempty to construct the POST body with only provided fields:
Always sets query.
Optionally sets type, category, userLocation, numResults, includeDomains, excludeDomains, startCrawlDate, endCrawlDate, startPublishedDate, endPublishedDate, includeText, excludeText, context, moderation, contents.
API Request
POST https://api.exa.ai/search with JSON body and headers:
x-api-key: <from $env.exa_api_key>
Content-Type: application/json
Timeout: 60s. ([docs.exa.ai][1])
Precondition
Ensures HTTP 200; returns error if not.
Response
Returns Exa’s JSON payload, including results[], resolvedSearchType, optional context, and any extracted contents (when requested). ([docs.exa.ai][1])
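Outside Xano, the request this stack builds looks roughly like the sketch below, assuming a runtime with global fetch (e.g., Node 18+); the helper name and empty-field check are illustrative, not part of the action.
`typescript
// Send a search request to Exa, including only the fields that were actually provided
// (mirroring the set_ifnotempty behavior described above). Illustrative sketch.
async function exaSearch(apiKey: string, params: Record<string, unknown>) {
  const body: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined && value !== null && value !== "") {
      body[key] = value; // skip empty inputs
    }
  }
  const res = await fetch("https://api.exa.ai/search", {
    method: "POST",
    headers: { "x-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: AbortSignal.timeout(60_000), // 60s timeout, matching the stack configuration
  });
  if (res.status !== 200) throw new Error(`Exa search failed: ${res.status}`); // precondition
  return res.json(); // results[], resolvedSearchType, optional context/contents
}
`
For example, exaSearch(key, { query: "Latest research in LLMs", contents: { text: true } }) reproduces the first example below.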
Example Usage
1) Simple neural/auto search with full text
Request
`json
{
"query": "Latest research in LLMs",
"contents": { "text": true }
}
`
Response (truncated)
`json
{
"resolvedSearchType": "neural",
"results": [
{
"title": "A Comprehensive Overview of Large Language Models",
"url": "https://arxiv.org/pdf/2307.06435.pdf",
"publishedDate": "2023-11-16T01:36:32.547Z",
"id": "https://arxiv.org/abs/2307.06435",
"text": "Abstract Large Language Models (LLMs) have recently...",
"highlights": ["Such requirements have limited their adoption..."],
"summary": "This overview paper on LLMs highlights key developments..."
}
]
}
`
(Fields reflect Exa’s reference example.) ([docs.exa.ai][1])
2) Fast search scoped to research papers since 2024
Request
`json
{
"query": "vision-language models retrieval evaluation",
"type": "fast",
"category": "research paper",
"startPublishedDate": "2024-01-01T00:00:00.000Z",
"numResults": 10,
"contents": { "text": true, "summary": true }
}
`
Why: fast, semantic+keyword blend; filtered to papers; pulls text + summaries. ([docs.exa.ai][3])
3) News-only, U.S. bias, with freshness and safety
Request
`json
{
"query": "FCC net neutrality order enforcement",
"category": "news",
"userLocation": "US",
"startPublishedDate": "2025-07-01T00:00:00.000Z",
"moderation": true,
"numResults": 10,
"contents": { "text": true, "highlights": true, "summary": true, "livecrawl": "preferred" }
}
`
Why: focuses on news, biases to US, filters by publish date, enables moderation, asks Exa to fetch fresh content but fall back gracefully via livecrawl: "preferred". ([docs.exa.ai][4])
4) Precision domain targeting with include/exclude text
Request
`json
{
"query": "vector database RAG production scaling",
"includeDomains": ["arxiv.org", "docs.pinecone.io"],
"excludeDomains": ["medium.com"],
"includeText": ["retrieval augmented"],
"excludeText": ["course"],
"numResults": 10,
"contents": { "text": true }
}
`
Why: restrict to trusted sources, ensure specific phrasing, exclude tutorial-like “course” content. (Note include/exclude text currently supports one phrase ≤5 words.) ([docs.exa.ai][1])
Output Shape (key fields)
resolvedSearchType: the actual search used (neural or keyword when auto).
results[]: each has title, url, publishedDate, author; may include text, highlights, summary, image, favicon, and subpages when contents is requested.
Optional context: LLM-ready compiled string of results if context=true. ([docs.exa.ai][1])
Notes & Best Practices
Result limits: keyword returns up to 10, neural up to 100 results per call; numResults default is 10. ([docs.exa.ai][1])
LiveCrawl choices: always (freshest, no cache), preferred (fresh with fallback), fallback (cache first), never (fastest, historical). Pick based on freshness needs. ([docs.exa.ai][5])
Contents extraction: For deeper reading or summarization, pass contents (e.g., { "text": true, "summary": true, "highlights": true }). Exa’s /contents endpoint underpins this feature. ([docs.exa.ai][2])
Subpage crawling: For richer site context (e.g., company research), you can include subpages & subpageTarget inside contents to crawl linked sections like “news” or “products.” Start small (5–10). ([docs.exa.ai][6])
Speed vs. quality: Use type="fast" when latency is critical; otherwise auto generally balances relevance and speed. ([docs.exa.ai][3])
Troubleshooting
401/403 – Auth error: Ensure x-api-key header is set to a valid key from your Exa dashboard. ([docs.exa.ai][1])
400 – Validation error: Check enum values (type, category) and date formats (ISO 8601). Also confirm numResults within limits for the chosen type. ([docs.exa.ai][1])
Empty results: loosen includeText/excludeText phrases (remember: 1 phrase ≤5 words), remove restrictive includeDomains/excludeDomains, or adjust startPublishedDate/endPublishedDate. ([docs.exa.ai][1])
Stale content: Add contents: { "livecrawl": "preferred" } (or "always" when you truly need real-time). ([docs.exa.ai][5])
Security & Configuration
The action pulls the API key from $env.exa_api_key and sends it via the x-api-key header to https://api.exa.ai/search. Do not hardcode keys in inputs. ([docs.exa.ai][1])
References
Search API reference (parameters, defaults, limits, response fields). ([docs.exa.ai][1])
How search types work (auto, neural, keyword, fast). ([docs.exa.ai][7])
LiveCrawl options (always, preferred, fallback, never). ([docs.exa.ai][5])
Get contents (text/highlights/summary). ([docs.exa.ai][2])
Fast search announcement (latency details). ([docs.exa.ai][3])
Tip: If you want a single LLM-ready blob rather than per-result text, add "context": true in inputs; Exa will return context alongside results. ([docs.exa.ai][1])
[1]: https://docs.exa.ai/reference/search "Search - Exa"
[2]: https://docs.exa.ai/reference/get-contents "Get contents - Exa"
[3]: https://docs.exa.ai/changelog/new-fast-search-type "New Fast Search Type - Exa"
[4]: https://docs.exa.ai/changelog/livecrawl-preferred-option "New Livecrawl Option: Preferred - Exa"
[5]: https://docs.exa.ai/reference/should-we-use-livecrawl "Livecrawling Contents - Exa"
[6]: https://docs.exa.ai/reference/crawling-subpages-with-exa "Crawling Subpages - Exa"
[7]: https://docs.exa.ai/reference/how-exa-search-works "How Exa Search Works"
Gemini → Audio Understanding
Overview
This action enables you to perform audio understanding and analysis using the Gemini API. By referencing an existing, uploaded audio file and providing a question, you can instruct Gemini to analyze, summarize, or extract insights from the audio content.
Note: The file_uri must refer to an audio file that has already been uploaded and is accessible to the Gemini API. This action does not perform file uploads.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text | Yes | Your Gemini API key (from settings registry). |
| file_uri | text | Yes | The URI of an audio file already uploaded to Google Gemini. |
| question | text | Yes | Your prompt, task, or question for Gemini about the audio content. |
Function Stack
Talk to Uploaded Content
Uses the given file_uri and question to instruct Gemini to analyze the specific audio file.
API Request
Posts a request to
`
https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=YOUR_API_KEY
`
Including a payload such as:
`json
{
"contents": [
{
"parts": [
{ "filedata": { "fileuri": "<file_uri>" } },
{ "text": "<question>" }
]
}
]
}
`
Authenticates with your Gemini API key.
Parses Gemini's response.
Precondition
Ensures the API response status is 200.
Response
Returns the result from gemini_api.response.result.
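As a point of reference, the equivalent HTTP call outside Xano looks roughly like this sketch, assuming a runtime with global fetch; the helper name is illustrative, not part of the action.
`typescript
// Ask Gemini a question about an already-uploaded audio file (illustrative sketch).
async function askGeminiAboutAudio(apiKey: string, fileUri: string, question: string) {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            { file_data: { file_uri: fileUri } }, // reference to the already-uploaded audio file
            { text: question },
          ],
        },
      ],
    }),
  });
  if (res.status !== 200) throw new Error(`Gemini request failed: ${res.status}`); // precondition
  return res.json();
}
`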
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"file_uri": "https://storage.googleapis.com/path-to-your-audio.wav",
"question": "Summarize what is being discussed in this meeting recording."
}
`
Response
`json
{
"result": "The meeting discusses quarterly revenue figures, marketing strategy, and upcoming project deadlines."
}
`
Notes
The file_uri must already point to a publicly or API-accessible audio file uploaded to Google Gemini.
Common use cases: summarize calls, extract action items, identify speakers, or answer specific questions about audio content.
Ensure your API key’s quota and model access cover the requested usage.
Troubleshooting
INVALID_ARGUMENT or NOT_FOUND: Check that your file_uri is correct, exists, and is accessible.
PERMISSION_DENIED: Your Gemini API key might be invalid or lack the necessary permissions.
UNSUPPORTED_MEDIA_TYPE: Make sure your audio file format is supported.
For more help, refer to the Gemini API documentation.
References
Gemini API: Overview & Docs
Gemini API: Supported Models
Gemini → Chat with PDF
Overview
This action enables you to interact with the Gemini API using a PDF file and a text question. The function accepts a reference to an already uploaded PDF file (as a file_uri) hosted on Google, and a user question, then queries Gemini’s generative content API to get a response.
Important: A file_uri must already be uploaded and available on Google before using this action. The function does not handle file uploads; it operates on an existing, accessible URI.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text | Yes | Your Gemini API key (from settings registry). |
| file_uri | text | Yes | URI of the already-uploaded PDF file on Google. |
| question | text | Yes | The question to ask Gemini about the content of the uploaded PDF. |
Function Stack
Talk to Uploaded Content
The function receives file_uri and question as inputs.
API Request
Sends a POST request to:
`
https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=YOUR_API_KEY
`
Using the following payload structure:
`json
{
"contents": [
{
"parts": [
{
"filedata": { "fileuri": "<file_uri>" }
},
{
"text": "<question>"
}
]
}
]
}
`
Authenticates using gemini_api_key.
Returns the API response as gemini_api.
Precondition
Checks: gemini_api.response.status == 200
(Proceeds only if the API response is successful.)
Response
Returns: gemini_api.response.result
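Gemini's raw generateContent payload nests the answer under candidates; below is a small sketch of pulling the text out, assuming the standard response shape (candidates → content → parts → text). The helper and interface names are illustrative.
`typescript
// Extract the model's text answer from a raw generateContent response (illustrative sketch).
interface GeminiResponse {
  candidates?: { content?: { parts?: { text?: string }[] } }[];
}

function extractAnswer(result: GeminiResponse): string {
  const parts = result.candidates?.[0]?.content?.parts ?? [];
  // Concatenate the text parts; non-text parts (e.g., inlineData) are ignored.
  return parts.map((part) => part.text ?? "").join("");
}
`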
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"file_uri": "https://storage.googleapis.com/path-to-your-pdf.pdf",
"question": "Summarize the main arguments from this document."
}
`
Response
`json
{
"result": "The main arguments in this document are..."
}
`
Notes
The file_uri must be a direct URL to a PDF file that is already uploaded and publicly or appropriately accessible to the Gemini API.
The function does NOT upload files; ensure you upload your PDF first and obtain the URI.
The question can be any prompt or inquiry about the content of the PDF.
You must use a valid and enabled Gemini API key.
Troubleshooting
INVALID_ARGUMENT or NOT_FOUND: Verify the file_uri is correct, accessible, and the PDF exists.
PERMISSION_DENIED: Ensure your API key has access, and the file permissions are properly set on Google.
REQUEST_DENIED: Check your API key and ensure billing or usage limits have not been exceeded.
Other API errors: Refer to Gemini API documentation for more details.
References
Gemini API: Overview & Docs
Gemini API: Supported Models
Gemini → Check Video Job Status
Overview
This action allows you to check the status of a video generation job submitted to the Gemini API. By providing the unique operation name returned from the video generation request, you can monitor if the video is still being processed or is ready for download.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text | Yes | Your Gemini API key (from settings registry). |
| name | text | Yes | The operation name from the video generation response. |
Function Stack
Gemini API Request
Sends a GET request to:
`
https://generativelanguage.googleapis.com/v1beta/{name}
`
where {name} is the value returned in the original video generation operation.
Precondition
Validates the response status as 200 (success).
Response
Returns the result, showing the full status details for the video job.
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"name": "models/veo-3.0-generate-preview/operations/abcd1234"
}
`
Response
`json
{
"done": true,
"response": {
"generatedVideos": [
{
"video": {
"uri": "https://generativelanguage.googleapis.com/v1beta/files/abcd1234...",
"mimeType": "video/mp4"
},
"durationSeconds": 5
// ...other metadata
}
]
}
}
`
Notes
Use the operation name (such as models/veo-3.0-generate-preview/operations/abcd1234) exactly as provided in the video generation response.
Poll this endpoint periodically until you see "done": true in the response (see the polling sketch below).
Once the job is complete, video URIs and result metadata will be included in the response.
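A minimal polling sketch for this status check, assuming a runtime with global fetch; the 10-second delay and attempt limit are illustrative choices, not part of the action.
`typescript
// Poll the operation until "done": true, then return the final payload (illustrative sketch).
async function waitForVideo(apiKey: string, operationName: string, maxAttempts = 30) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/${operationName}?key=${apiKey}`
    );
    if (res.status !== 200) throw new Error(`Status check failed: ${res.status}`); // precondition
    const op = await res.json();
    if (op.done) return op; // contains response.generatedVideos[] with the video URI
    await new Promise((resolve) => setTimeout(resolve, 10_000)); // wait 10s between polls
  }
  throw new Error("Video generation did not finish in time");
}
`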
Troubleshooting
PERMISSION_DENIED or INVALID_ARGUMENT: Verify your API key and the correct name value.
Operation appears stuck: Wait and poll again, as rendering may take several seconds or longer for complex or high-resolution videos.
Missing video in response: Ensure you are referencing the correct and currently active operation.
References
Gemini API: Video Job Operations
Gemini → Generate Content
Overview
This action lets you generate text or general AI content using the Gemini API. You specify the model name and a prompt, then the function sends this request to Gemini and returns the generated response.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text (registry) | Yes | Your Gemini API key from the settings registry. |
| model | text | Yes | The Gemini model to use (e.g., gemini-1.5-flash). |
| prompt | text | Yes | The prompt or question for Gemini to generate content for. |
Function Stack
Gemini API Request
Sends a POST request to:
`
https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={gemini_api_key}
`
The body of the request includes the user-supplied prompt.
The response is saved as gemini_api.
Precondition
Verifies the API response status is 200 to ensure the request succeeded.
Response
Returns the content from gemini_api.response.result.
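The entry above does not show the request body; here is a minimal sketch, assuming the standard generateContent payload where the prompt is wrapped as a single text part and a runtime with global fetch. The helper name is illustrative.
`typescript
// Generate text content from a plain prompt (illustrative sketch).
async function generateContent(apiKey: string, model: string, prompt: string) {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }), // prompt as a single text part
  });
  if (res.status !== 200) throw new Error(`Gemini request failed: ${res.status}`); // precondition
  return res.json();
}
`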
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"model": "gemini-1.5-flash",
"prompt": "Summarize the latest research trends in artificial intelligence."
}
`
Response
`json
{
"result": "Recent AI research trends include improvements in large language models, multimodal AI, edge computing integration, AI safety efforts, and reinforcement learning advancements."
}
`
Notes
Select a valid Gemini model name for your use case.
The gemini_api_key must have the appropriate access and quota.
The prompt should be clear for best results from the model.
For more advanced outputs (code, lists, summaries, etc.), craft your prompt accordingly.
Troubleshooting
PERMISSION_DENIED or INVALID_ARGUMENT: Check if your API key and model name are correct and supported.
Empty or incomplete response: Revise the prompt or try a different model.
Other errors: Consult the Gemini API documentation for troubleshooting and more usage instructions.
References
Gemini API: Documentation & Models
Gemini API: Overview
Gemini → Generate Image
Overview
This action allows you to generate an image using the Gemini API by providing a model choice and a prompt description. The function sends your image generation request to Gemini, retrieves the resulting image data, and saves it as a downloadable file.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text | Yes | Your Gemini API key (from settings registry). |
| model | text | Yes | The name of the Gemini model to use for image generation |
| prompt | text | Yes | The textual description for the image you wish to generate |
Function Stack
Gemini API Request
Submits the provided prompt to the selected image-generation model at:
`
https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={API_KEY}
`
Stores the Gemini response as gemini_api.
Precondition
Ensures the request was successful with response status 200.
Save Image
Extracts image data from
gemini_api.response.result.candidates.content.parts.inlineData.data
Saves the image as image_response.png.
Response
Returns the complete Gemini API response (gemini_api.response.result).
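Outside Xano, the same extraction-and-save step looks roughly like this Node.js sketch; the response path follows the candidates → content → parts → inlineData structure described above, and the helper name is illustrative.
`typescript
import { writeFileSync } from "node:fs";

// Pull the base64-encoded image out of a generateContent response and write it to disk (illustrative sketch).
function saveGeneratedImage(result: any, outPath = "image_response.png"): void {
  const parts = result.candidates?.[0]?.content?.parts ?? [];
  const imagePart = parts.find((part: any) => part.inlineData?.data); // first part carrying inline image data
  if (!imagePart) throw new Error("No inlineData found in response");
  writeFileSync(outPath, Buffer.from(imagePart.inlineData.data, "base64"));
}
`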
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"model": "gemini-1.5-flash",
"prompt": "A futuristic city skyline at sunset in vibrant colors"
}
`
Response
`json
{
"candidates": [
{
"content": {
"parts": [
{ "text": "A futuristic city skyline at sunset..." },
{ "inlineData": { "mimeType": "image/png", "data": "iVBORw0KGgoAAAANSUhEUg..." } }
]
}
}
]
// ... other response data
}
`
Notes
The model parameter lets you choose among the available Gemini image-generation models; ensure the selected model actually supports image output.
The generated image is saved as a file (image_response.png), which you can directly use or download.
The prompt should be clear and descriptive for best results.
Make sure your API key has access to the relevant model and sufficient usage quota.
Troubleshooting
PERMISSION_DENIED or INVALID_ARGUMENT: Check your API key, model name, and prompt formatting.
No image output: If inlineData.data is empty, check your model choice and try another prompt.
Image file corrupt or unreadable: Confirm the returned data and file creation process completed successfully.
References
Gemini API: Image Generation
Gemini API Documentation
Gemini → Generate Video
Overview
This action enables video generation through Google's Gemini API (with models like Veo 3). By providing a prompt and configuring video parameters, you can create unique AI-generated video content via API.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text | Yes | Your Gemini API key (from settings registry). |
| model | text | Yes | The model to use (e.g., veo-3.0-generate-preview). |
| prompt | text | Yes | The descriptive prompt for the video. |
| aspect_ratio | enum | Yes | Desired output aspect ratio (e.g., "16:9", "9:16"). |
| person_generation | enum | Yes | Person appearance policy (e.g., "allow_all", "dont_allow"). |
Function Stack
Gemini API Request
Sends a POST request to:
`
https://generativelanguage.googleapis.com/v1beta/models/{model}
`
Passes the prompt and video-specific parameters (aspect ratio, person generation policy) in the body.
Precondition
Checks for a successful (status 200) response.
Response
Returns the API response result (including video metadata and download instructions).
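For orientation, here is a sketch of what such a request looks like against Google's public Veo endpoint. The :predictLongRunning method suffix and the instances/parameters body shape come from Google's Veo documentation, not from this action's definition, so treat them as assumptions; the Xano action may shape its request differently.
`typescript
// Start a long-running video generation job (illustrative sketch based on Google's public Veo docs).
async function startVideoJob(apiKey: string, model: string, prompt: string) {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/${model}:predictLongRunning?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      instances: [{ prompt }],
      parameters: { aspectRatio: "16:9", personGeneration: "allow_all" },
    }),
  });
  if (res.status !== 200) throw new Error(`Video request failed: ${res.status}`); // precondition
  const { name } = await res.json(); // operation name, used with "Check Video Job Status"
  return name;
}
`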
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"model": "veo-3.0-generate-preview",
"prompt": "A cinematic wide shot panning over a futuristic city skyline at sunset",
"aspect_ratio": "16:9",
"persongeneration": "allowall"
}
`
Response
`json
{
"name": "/veo.../.../.."
}
`
Notes
The model should be set to the latest supported Gemini video model. Use Veo 3 for best results.
You can adjust the prompt and configuration for different styles, content, and policies.
Person generation controls the presence and portrayal of people in the resulting video.
Download links for generated videos are temporary—be sure to save your files promptly.
Troubleshooting
PERMISSION_DENIED or INVALID_ARGUMENT: Check your API key, model name, and input parameters.
Video not returned or incomplete: Simplify your prompt or relax constraints for best performance.
For current API documentation and supported parameters, reference the Gemini API documentation.
Gemini → Image Understanding
Overview
This action enables you to analyze and understand images using the Gemini API. By providing an image file along with a prompt (your question or instruction), you can instruct Gemini to interpret, describe, or extract insights from the image content using a selected Gemini model.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text (registry) | Yes | Your Gemini API key (from settings registry). |
| model | text | Yes | The Gemini model to use (e.g., gemini-1.5-flash, gemini-pro-vision). |
| prompt | text | Yes | The question, instruction, or prompt about the image. |
| image | file resource | Yes | The image file you want Gemini to analyze. |
Function Stack
Create file resource from image
Reads the file payload from the provided image input.
Gemini API Request
Sends a POST request to:
`
https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={gemini_api_key}
`
The request body includes both the raw image data and the prompt, following Gemini API’s content structure.
Precondition
Verifies that the response status is 200 to continue processing.
Response
Returns the result from the response: gemini_api.response.result.
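The action handles the image encoding internally; outside Xano the equivalent request looks roughly like the sketch below. The inline_data/mime_type field names follow Gemini's standard request shape and are assumptions relative to this action's internals; the helper name is illustrative.
`typescript
import { readFileSync } from "node:fs";

// Send an image plus a prompt to generateContent by inlining the image as base64 (illustrative sketch).
async function askGeminiAboutImage(apiKey: string, model: string, prompt: string, imagePath: string) {
  const imageBase64 = readFileSync(imagePath).toString("base64");
  const url = `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            { inline_data: { mime_type: "image/png", data: imageBase64 } }, // raw image bytes, base64-encoded
            { text: prompt },
          ],
        },
      ],
    }),
  });
  if (res.status !== 200) throw new Error(`Gemini request failed: ${res.status}`); // precondition
  return res.json();
}
`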
Example Usage
Request
`json
{
"geminiapikey": "AIzaSyD...",
"model": "gemini-1.5-flash",
"prompt": "Describe what is happening in this image.",
"image": "(attach image file)"
}
`
Response
`json
{
"result": "The image shows a group of people hiking on a mountain trail under a clear sky."
}
`
Notes
The model parameter lets you select among available Gemini models for vision tasks (such as gemini-1.5-flash or similar).
The image input must be an actual image file (PNG, JPEG, etc.).
Ensure your Gemini API key has the necessary permissions and quota for vision/model usage.
This action handles encoding and passing the image to Gemini as required by the API.
Troubleshooting
PERMISSION_DENIED or UNAUTHORIZED: Check your Gemini API key and model permissions.
INVALID_ARGUMENT: Make sure your prompt is a string and your image is a valid file format.
UNSUPPORTED_MEDIA_TYPE: Only supported image formats (like JPEG, PNG) can be processed.
Other errors: Refer to the Gemini API documentation for detailed troubleshooting guidance.
References
Gemini API: Content Understanding
Gemini API: Overview & Vision Models
Gemini → Upload File
Overview
This action allows you to upload a file (such as a PDF) to Google Gemini’s file API. Once the file is uploaded, you receive a direct file URI, which can be used in subsequent Gemini API operations (for example, with the “Chat with PDF” action).
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| gemini_api_key | text | Yes | Your Gemini API key (from settings registry). |
| file | file | Yes | The file resource to be uploaded to Gemini (e.g., PDF). |
Function Stack
Get File Resource Data
Reads the file resource provided as input.
Initiate Upload
Sends a request to
`
https://generativelanguage.googleapis.com/upload/v1beta/files?key=YOUR_KEY
`
to obtain an upload URL from Gemini.
Extract Upload URL
Parses the response headers to retrieve the upload endpoint for the file.
Upload File to URL
Uploads the file directly to the returned upload URL.
Response
Returns the uploaded file's URI (file.uri) from the Gemini API response.
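The same two-step flow (initiate, then upload) outside Xano looks roughly like this sketch; the X-Goog-Upload-* headers follow Gemini's resumable upload protocol and are assumptions relative to this action's internals, and the helper name is illustrative.
`typescript
import { readFileSync } from "node:fs";

// Upload a local file to the Gemini Files API and return its file URI (illustrative sketch).
async function uploadFileToGemini(apiKey: string, filePath: string, mimeType: string) {
  const data = readFileSync(filePath);

  // Step 1: initiate the upload and read the upload URL from the response headers.
  const start = await fetch(`https://generativelanguage.googleapis.com/upload/v1beta/files?key=${apiKey}`, {
    method: "POST",
    headers: {
      "X-Goog-Upload-Protocol": "resumable",
      "X-Goog-Upload-Command": "start",
      "X-Goog-Upload-Header-Content-Length": String(data.byteLength),
      "X-Goog-Upload-Header-Content-Type": mimeType,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ file: { display_name: filePath } }),
  });
  const uploadUrl = start.headers.get("x-goog-upload-url");
  if (!uploadUrl) throw new Error("No upload URL returned");

  // Step 2: upload the bytes to the returned URL and finalize.
  const finish = await fetch(uploadUrl, {
    method: "POST",
    headers: { "X-Goog-Upload-Command": "upload, finalize", "X-Goog-Upload-Offset": "0" },
    body: data,
  });
  const { file } = await finish.json();
  return file.uri; // pass this URI to "Chat with PDF" or "Audio Understanding"
}
`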
Example Usage
Request
Supply your Gemini API key and a file (e.g., PDF) you wish to upload.
Response
`json
{
"file": {
"uri": "https://generativelanguage.googleapis.com/v1beta/files/abcdefg1234567"
}
}
`
Notes
The returned file URI is required for interacting with actions such as "Chat with PDF."
Ensure your Gemini API key has the necessary permissions and quota.
Uploaded files must meet Gemini’s input requirements and size restrictions.
You cannot use a local file path as the file input; it must be an uploaded file resource.
Troubleshooting
PERMISSION_DENIED or REQUEST_DENIED: Check your API key and account permissions.
Upload errors: Ensure the file format and size are supported by Gemini.
Missing file URI: If the upload response does not return a URI, review the API and upload procedures.
References
Gemini API: Upload Files
Gemini API Documentation
Google Maps → Autocomplete API
Overview
This Xano action integrates with the Google Maps Places Autocomplete API to provide location-based suggestions as users type. It is designed to enhance user experience by offering predictive search results for addresses and places, using various optional parameters to refine the results.
Inputs
| Name | Type | Required | Description |
| :-- | :-- | :-- | :-- |
| google_api_key | text | Yes | Your Google Maps API key (from settings registry). |
| input | text | Yes | The text string for which to return place predictions. |
| session_token | text | No | A unique token for grouping Autocomplete requests (improves billing & usage). |
| field_mask | text | No | Comma-separated list of fields to include in the response. |
| latitude | text | No | Latitude for location biasing. |
| longitude | text | No | Longitude for location biasing. |
| radius | integer | No | Radius (in meters) for location biasing. |
| region_code | text | No | Region code to bias results (e.g., "us", "in"). |
Function Stack
Prepare Parameters
Prepare field_mask Parameter:
Sets the field_mask variable to "*" if not provided, or uses the input value.
Prepare session_token:
Uses the provided session_token or generates a new UUID if not supplied.
API Request
API Request:
Sends a GET request to
https://places.googleapis.com/v1/places:autocomplete
with the relevant parameters (input, session_token, field_mask, latitude, longitude, radius, region_code, and google_api_key).
Precondition
Checks that the API response status is 200 and that a result is present.
Response
Returns the API response result and the session_token.
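For comparison, a minimal sketch of calling the Places Autocomplete (New) endpoint directly is shown below. Google's v1 endpoint expects a POST with a JSON body and X-Goog-* headers, and the field names here follow Google's public docs; they may differ from how this action maps its inputs.
`typescript
// Call Google's Places Autocomplete (New) endpoint directly (illustrative sketch, not the Xano stack).
async function placesAutocomplete(apiKey: string, input: string, sessionToken: string) {
  const res = await fetch("https://places.googleapis.com/v1/places:autocomplete", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Goog-Api-Key": apiKey,
      "X-Goog-FieldMask": "*", // narrow to specific fields in production
    },
    body: JSON.stringify({
      input,
      sessionToken, // groups the keystrokes of one user session for billing
      locationBias: { circle: { center: { latitude: 51.5237, longitude: -0.1585 }, radius: 500 } },
      includedRegionCodes: ["gb"],
    }),
  });
  if (res.status !== 200) throw new Error(`Autocomplete failed: ${res.status}`); // precondition
  return res.json(); // suggestions with place predictions
}
`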
Example Usage
Request
`json
{
"googleapikey": "AIzaSyD...",
"input": "221B Baker Street",
"latitude": "51.5237",
"longitude": "-0.1585",
"radius": 500,
"region_code": "gb"
}
`
Response
`json
{
"api_response": [
{
"placeid": "ChIJd8BlQ2BZwokRAFUEcmqrcA",
"description": "221B Baker St, Marylebone, London NW1 6XE, UK",
// ... other fields as specified by field_mask
}
// ... additional predictions
],
"session_token": "d7b6fbb8-4a1d-4c7a-8b8e-9d2d7a6c1e2f"
}
`
Notes
The google_api_key must have Places API enabled.
Using a session_token is recommended for grouping user sessions and optimizing billing.
The field_mask parameter controls which fields are returned; use "*" to return all available fields.
Location and region parameters help bias results to a specific area or country.
Troubleshooting
REQUEST_DENIED: Ensure your API key is correct and has Places API access enabled.
INVALID_REQUEST: Check that the required input parameter is provided.
ZERO_RESULTS: No predictions found for the input; try adjusting the input or location bias.
Other Errors: Refer to the Google Maps Places API documentation for detailed error handling.
References
Google Maps Places Autocomplete API Documentation
Google Maps API Error Messages