Overview
For organizations standing up a new Mobaro environment, restructuring an existing one, or syncing Location data from an upstream system (a CMMS, an ERP, an internal site catalog), the Mobaro API is the right tool to create and update Locations at scale. This article covers the Location-specific workflow — where the endpoints live, the dependency order for hierarchical structures, common gotchas, and a worked example. CSV-based imports are also possible, but are currently performed by the Mobaro support and CSM teams rather than self-serve; see CSV imports below for how to request one.
Why this matters: Locations are the spine of Mobaro — Schedules, Assignments, Assets, Notification Rules, and reporting all hang off them. Building the Location tree manually through the UI for hundreds of sites is slow and error-prone, and it disconnects Mobaro from whatever upstream system already holds the source-of-truth. An API-driven sync keeps Mobaro accurate as your real-world site list evolves; a one-time bulk import gets a new environment to a working state in hours instead of weeks.
Required access: Bulk Location operations need a valid Mobaro API key. The User the key authenticates as must have Locations: Modify in their Role to create or edit Locations. See Creating and Managing API Keys for setup.
Where the endpoint reference lives
The complete, authoritative reference for Location endpoints — including request bodies, query parameters, and response schemas — is published in the Mobaro API documentation.
This article covers the workflow and the gotchas; refer to the docs for exact field names, types, and validation rules.
The Location endpoints at a glance
The Location endpoints follow Mobaro's standard REST conventions (HTTPS + JSON, X-Api-Key header). The shape:
| Operation | Method & endpoint | Use it for |
| --- | --- | --- |
| List Locations | `GET /api/customers/locations` | Pull existing Locations for sync, audit, or snapshot. |
| Get a single Location | `GET /api/customers/locations/{id}` | Fetch one Location's full detail by ID. |
| Create a Location | `POST /api/customers/locations` | Bulk import: one POST per Location. |
| Update a Location | `PUT /api/customers/locations/{id}` | Edit a single Location by ID — for sync from upstream systems. |
| Delete a Location | `DELETE /api/customers/locations/{id}` | Remove a Location. |
Note: There is no single "bulk create" endpoint that accepts an array of Locations. Bulk imports are performed by issuing one POST per Location, typically in a script that iterates over your source data. The same pattern applies to Location Groups via the parallel /api/customers/locationgroups endpoints — refer to the API docs for those.
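Since there is no bulk endpoint, the import loop is the core of any script. A minimal sketch of the one-POST-per-Location pattern — the `post` callable stands in for the real HTTP call (it would wrap `POST /api/customers/locations` with your `X-Api-Key` header), and the payload field names are illustrative, matching the worked example later in this article:

```python
def bulk_create(rows, post):
    """Issue one create call per source row, collecting the new Mobaro IDs.

    `post` is any callable that takes a payload dict and returns the created
    Location's ID. In a real script it would wrap an HTTP POST to
    /api/customers/locations; here it is injected so the loop is testable.
    """
    created = {}  # externalId -> Mobaro ID, useful for later parent lookups
    for row in rows:
        payload = {"name": row["name"], "externalId": row["externalId"]}
        created[row["externalId"]] = post(payload)
    return created

# Usage with a stand-in for the HTTP layer:
ids = bulk_create(
    [{"name": "Galaxy Land", "externalId": "GL-01"}],
    post=lambda payload: "id-" + payload["externalId"],
)
```

Injecting the HTTP call also makes dry runs and retries easy to bolt on later without touching the loop itself.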
A typical bulk import workflow
1. Prepare your source data
Start with a clean, structured source: a CSV, Excel sheet, or export from your existing site catalog. Each row should represent one Location. At minimum, capture:
Name — required.
Parent Location — if applicable, the parent Location's ID or External ID (the parent must be created first). Top-level Locations have no parent.
External ID — your stable identifier from the source system. Strongly recommended for any sync workflow; it's your idempotency anchor.
Time zone — important if the Location's downtime, schedules, or operating hours need to be calculated against local time. See How operating hours affect downtime tracking.
Any other supported fields you want to populate (Code, Description, custom properties, geographic coordinates, etc.).
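A sketch of loading and validating that source data, assuming hypothetical CSV column names (`Name`, `ExternalId`, `Parent`, `TimeZone`) — rename them to match your actual export. Failing fast on a missing Name here is much cheaper than discovering it mid-import:

```python
import csv
import io

def load_rows(csv_text):
    """Parse source CSV text into row dicts, failing fast on missing names.

    Column names (Name, ExternalId, Parent, TimeZone) are illustrative --
    match them to whatever your source export actually uses.
    """
    rows = []
    # Data starts on line 2; line 1 is the header row.
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        if not (row.get("Name") or "").strip():
            raise ValueError(f"row {line_no}: Name is required")
        rows.append({
            "name": row["Name"].strip(),
            "externalId": (row.get("ExternalId") or "").strip() or None,
            "parentExternalId": (row.get("Parent") or "").strip() or None,
            "timeZone": (row.get("TimeZone") or "").strip() or None,
        })
    return rows
```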
2. Decide on hierarchy depth
Locations in Mobaro are hierarchical — a park contains zones, zones contain attractions, attractions contain individual ride positions or queue lines. Before importing, decide how deep your tree needs to be. Going too shallow means losing the granularity Reports and Schedules need; going too deep creates a tree that's painful to navigate. Three to four levels is a common sweet spot.
Best practice: Sketch the hierarchy on paper (or in a spreadsheet) before writing the import script. Walk through real workflows — "where does a Schedule for ride inspections live?" "where does an F&B audit log fit?" — and confirm your tree supports them. Restructuring after import is much harder than getting it right the first time.
3. Import in dependency order
If your data includes a parent/child hierarchy (it almost always does), you must import top-level Locations before their children. A child's POST can reference its parent only after the parent exists in Mobaro. The simplest pattern: sort your source data by hierarchy depth, then iterate top-down.
Critical: Importing a child before its parent will return an error or create an orphaned Location (parent not set). Always sort by depth first. If you're using External IDs from your source system, build a map of {externalId → Mobaro ID} as you go and use it to resolve parent references on subsequent inserts.
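The depth-sort plus External-ID map described above can be sketched like this — `post` again stands in for the real HTTP call, and field names are illustrative:

```python
def depth(row, by_ext):
    """Number of ancestors above this row, following parent External IDs."""
    d = 0
    parent = row.get("parentExternalId")
    while parent:
        d += 1
        parent = by_ext[parent].get("parentExternalId")
    return d

def import_in_order(rows, post):
    """Create parents before children, resolving parent references as we go.

    `post` wraps the real HTTP POST and returns the new Location's Mobaro ID.
    """
    by_ext = {r["externalId"]: r for r in rows}
    id_map = {}  # externalId -> Mobaro ID, filled in as parents are created
    for row in sorted(rows, key=lambda r: depth(r, by_ext)):
        payload = {"name": row["name"], "externalId": row["externalId"]}
        if row.get("parentExternalId"):
            # Safe lookup: the parent sorted earlier, so it already exists.
            payload["parent"] = id_map[row["parentExternalId"]]
        id_map[row["externalId"]] = post(payload)
    return id_map
```

Note that `sorted` is stable, so siblings keep their source order within each depth level.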
4. Verify after import
After the import completes, pull the Location list (GET /api/customers/locations) and reconcile against your source data. Spot-check the parent/child relationships on key Locations to confirm the tree is intact. Open the Mobaro Backend and visually walk the tree from the Locations page — anything that doesn't look right is much easier to fix now than after Schedules and Assignments are layered on top.
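The reconciliation step reduces to a set comparison on External IDs. A minimal sketch, assuming `mobaro_locations` is the parsed response list from `GET /api/customers/locations` and that each entry carries the `externalId` you set on import:

```python
def reconcile(source_rows, mobaro_locations):
    """Compare source External IDs against what the API now reports.

    Returns IDs present in the source but absent from Mobaro ("missing")
    and IDs present in Mobaro but not in the source ("unexpected").
    """
    want = {r["externalId"] for r in source_rows}
    have = {loc.get("externalId") for loc in mobaro_locations}
    return {
        "missing": sorted(want - have),
        "unexpected": sorted(x for x in have - want if x),
    }
```

An empty `missing` list is the signal you want; anything in `unexpected` is worth a manual look before layering Schedules on top.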
Worked example: create a single Location
A minimal POST to create a child Location under an existing parent:
curl "https://app.mobaro.com/api/customers/locations" \
--request POST \
--header "Content-Type: application/json" \
--header "X-Api-Key: YOUR_SECRET_TOKEN" \
--data '{
"name": "Galaxy Coaster",
"code": "GC-001",
"description": "Main launch coaster — Galaxy Land",
"externalId": "GC-001",
"parent": "GALAXY_LAND_LOCATION_ID",
"timeZone": "America/New_York"
}'
Note: Field names shown are illustrative. Refer to the official Location endpoint reference for the exact request schema, including required vs. optional fields and accepted value types.
Common gotchas
Watch out for these issues, which account for most "my import didn't work the way I expected" tickets:
Parents must exist before children — sort your source data by hierarchy depth and import top-down. A reference to a non-existent parent will fail.
External IDs are your idempotency anchor — without them, re-running an import creates duplicate Locations. With them, you can detect existing records and switch to PUT instead of POST.
Time zones matter for operational data — a Location with the wrong (or missing) time zone will produce skewed downtime totals and confused Schedule windows. Set the time zone correctly on import rather than fixing it after operational data has accumulated.
Rate limiting — the API throttles requests under high load. Watch for 429 and 503 responses; back off and retry. Sequential imports with a small delay (e.g., 100 ms between requests) usually avoid this.
Deletions cascade in surprising ways — deleting a Location can orphan or invalidate Schedules, Assignments, and Assets attached to it. Always test deletion logic in a non-production environment first; for production cleanup, log every delete to a separate file.
Custom properties have to exist before you populate them — if your import payload includes custom Location properties, those property definitions must already be configured in Mobaro. Add the property definitions through the Backend (or via the appropriate API endpoint) before the import.
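For the rate-limiting gotcha above, a backoff-and-retry wrapper is a few lines. A sketch, with the HTTP call injected as a `(status_code, body)`-returning callable so the retry logic stays independent of your HTTP library:

```python
import time

def post_with_retry(payload, post, retries=5, base_delay=0.5, sleep=time.sleep):
    """Retry throttled requests (429/503) with exponential backoff.

    `post` wraps the real HTTP call and returns (status_code, body).
    Delay doubles each attempt: 0.5s, 1s, 2s, ...
    """
    for attempt in range(retries):
        status, body = post(payload)
        if status not in (429, 503):
            return status, body
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"gave up after {retries} throttled attempts")
```

The `sleep` parameter is injectable purely so tests do not actually wait; production code can leave the default.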
CSV imports — currently CSM-assisted
Mobaro can import Locations from a CSV, but the CSV import flow is currently handled by the Mobaro support team and Customer Success Managers rather than being self-serve in the UI. This is intentional — CSV imports are run as part of onboarding or major restructures where a CSM is already engaged, and the support team validates the data shape before processing.
If you want a CSV-based import, the path is:
1. Reach out to your CSM or email [email protected] with a description of the import (how many Locations, the source system, and the rough timeline).
2. The team will share the expected CSV column structure for the import.
3. You provide the cleaned CSV; Mobaro processes it.
4. A reconciliation pass confirms the tree and properties imported correctly.
Best practice: For one-time imports during onboarding or a major restructure, CSV through CSM/support is usually faster than building an API script. For ongoing sync from an upstream system, the API is the right path — it's repeatable, scriptable, and can run on whatever cadence your source data changes.
Best practices
Build for idempotency from day one
A re-runnable import script is one that can be safely re-executed without creating duplicates or unintended changes. The pattern: for each source record, check by External ID whether the Location already exists; POST if not, PUT if it does. This makes the import a sync rather than a one-shot operation.
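The POST-or-PUT decision above can be sketched as a small upsert helper — `existing_by_ext` would be built from a prior `GET /api/customers/locations`, and `post`/`put` are stand-ins for the real HTTP calls:

```python
def upsert(row, existing_by_ext, post, put):
    """Create the Location if its External ID is new, otherwise update it.

    `existing_by_ext` maps externalId -> Mobaro ID (built from a prior GET);
    `post` and `put` wrap the real HTTP calls and return the Location's ID.
    """
    payload = {"name": row["name"], "externalId": row["externalId"]}
    mobaro_id = existing_by_ext.get(row["externalId"])
    if mobaro_id is None:
        return post(payload)        # new record: create
    return put(mobaro_id, payload)  # known record: update in place
```

Run this per source row and the script is a sync, not a one-shot import: re-running it converges on the source data instead of duplicating it.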
Test with a representative subset first
Before importing thousands of Locations, run the script against ten or twenty records that include the trickiest cases (deepest hierarchies, edge-case names, populated custom properties). If those work, the rest will.
Use a dry-run mode
Add a flag to your script that processes the source data and logs what would be imported, without actually issuing POSTs. Reviewing the dry-run output catches malformed records, missing parents, and time-zone mistakes before any data lands in Mobaro.
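A dry-run flag is one conditional in the import loop. A sketch, again with the HTTP call injected and illustrative field names:

```python
def run_import(rows, post, dry_run=False, log=print):
    """With dry_run=True, log each would-be payload instead of POSTing it."""
    for row in rows:
        payload = {"name": row["name"], "externalId": row["externalId"]}
        if dry_run:
            log(f"DRY RUN would POST: {payload}")
        else:
            post(payload)
```

Reviewing the dry-run log is where malformed names, unresolved parents, and missing time zones surface — before any request reaches Mobaro.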
Test in a non-production environment when possible
If you have access to a staging or sandbox Mobaro environment, validate large imports there before running against production. If not, scope your initial production import to a single region or park first.
Best practice: Treat your Location tree like infrastructure. Keep your source data and import scripts in version control. Tag commits before each major import. If something goes wrong, you can reconstruct the prior state and fix forward without scrambling.
See also
Getting started with the Mobaro API — auth, headers, and your first request.
Creating and Managing API Keys — generate the key your import script will use.
Understanding API access scopes and limitations — what your key can and can't do.
Mobaro API parameter reference — pagination, filtering, and date filters.
Understanding IDs in Mobaro — for the difference between Mobaro IDs and External IDs.
Bulk imports and API access for Assets — for the parallel Asset import workflow, which often runs alongside a Location import.
How operating hours affect downtime tracking — for why time zone on Locations matters.
Frequently asked questions
Q: Can I do a CSV import myself through the UI?
A: Not currently. CSV imports of Locations are handled by Mobaro support and CSMs, typically as part of onboarding or a major restructure. For self-serve imports, the API is the path. Reach out to your CSM or [email protected] if a CSV import is the right fit for your situation.
Q: Can I import Locations in parallel for speed?
A: Sequential imports are recommended. Parallel posting can hit rate limits and creates ordering problems for hierarchies — a child can race ahead of its parent. For a few thousand Locations, sequential imports with a small delay are fast enough and far less error-prone.
Q: What happens if I post the same Location twice?
A: You get two Locations with the same name. Mobaro doesn't enforce name uniqueness on Locations. Use External IDs and check before each POST to avoid duplicates.
Q: Can the API set the parent Location?
A: Yes. The parent reference is part of the Location payload at create time, and can be updated via PUT later.
Q: How do I get the IDs of existing Locations?
A: Call GET /api/customers/locations with appropriate filters. Each Location in the response includes its Mobaro ID and any External ID you've set. Build a lookup map in your script as you process the response.
Q: Can I use Power Automate instead of writing a script?
A: Yes for many workflows. The Mobaro Power Automate connector covers common operations and is great for low/no-code automation. For complex bulk imports — especially with hierarchy resolution — a custom script is usually faster and easier to debug.
Q: Can I bulk delete Locations via the API?
A: There's no batch delete endpoint. Deletes are issued one Location at a time via DELETE /api/customers/locations/{id}. Be deliberate — Location deletes can cascade into Schedules, Assignments, Assets, and reporting, and there's no undo.
Q: Does the API support Location Groups too?
A: Yes, via parallel endpoints under /api/customers/locationgroups. Refer to the API docs for the exact shape; the workflow is similar to Locations themselves.
