Tom Lee-Gough



Netsuite API Response

Netsuite provides a pretty good API. It's really useful for automating processes and is not too complex to get working. I am going to cover how data is retrieved from Netsuite.

Saved Search and SuiteQL responses are slightly different, and SuiteQL is the more complex of the two to write. SuiteQL offers more flexibility, but it does not hold your hand at all: it only asks whether something is possible, not whether you should do it. Saved Searches are nicer in that they take care of certain problems, like foreign exchange, which is a pain to handle manually.

Calling some SuiteQL in Python is really easy once you have authentication set up:

import requests

def get_suiteql_results(suiteql_query, limit, offset):
    """Run a SuiteQL query and return one page of results as a dict."""

    url = "https://{{ACCOUNT}}.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql?limit={}&offset={}".format(
        limit, offset
    )

    headers = {
        'Authorization': 'Bearer ' + {{BEARER_TOKEN}},
        'Content-Type': 'application/json',
        'Cookie': 'NS_ROUTING_VERSION=LAGGING',
        'Prefer': 'transient'
    }

    # SuiteQL is always POSTed; the query itself goes in the JSON body.
    response = requests.post(
        url,
        headers=headers,
        json={'q': suiteql_query}
    )

    return response.json()
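
A call then looks something like this. The query is only an illustration; any SuiteQL select will do:

page = get_suiteql_results(
    "SELECT id, tranid, trandate FROM transaction",
    1000,  # limit
    0      # offset
)
print(page['totalResults'])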

All Netsuite responses are limited to 1,000 results per request. To retrieve the next block of 1,000 results, you need to advance the offset. A typical response looks like this:

{
    "links": [
        {
            "rel": "next",
            "href": "https://{{ACCOUNT}}.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql?limit=1000&offset=1000"
        },
        {
            "rel": "last",
            "href": "https://{{ACCOUNT}}.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql?limit=1000&offset=14000"
        },
        {
            "rel": "self",
            "href": "https://{{ACCOUNT}}.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql?limit=1000&offset=0"
        }
    ],
    "count": 1000,
    "hasMore": true,
    "items:" [
        {
            item_dict_1
        },
        {
            item_dict_2
        },
        {
            item_dict_3
        }
    ],
    "offset": 0,
    "totalResults": 15000
}

Retrieving all of the data in a large set is pretty trivial using a while loop on response['hasMore']; just remember to advance the offset in your URL.
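
As a minimal sketch, reusing get_suiteql_results from above:

def get_all_results(suiteql_query, page_size=1000):
    """Page through a SuiteQL result set by advancing the offset."""
    items = []
    offset = 0
    while True:
        page = get_suiteql_results(suiteql_query, page_size, offset)
        items.extend(page['items'])
        if not page['hasMore']:
            break
        offset += page_size  # move the offset on to the next block
    return items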

One thing to bear in mind is that, depending on your query, this process can take a long time. It is better to craft a select that retrieves only the data you need. I am attempting to use the API to retrieve the transactionLine table. Each request is ~3 MB and the whole table is ~1 million lines long, so I'm having to make ~1,000 requests to get the whole table. My script keeps getting caught up and failing at different points, and it's hard to know where the bottleneck(s) are. I think it is in the request itself. I did previously (mistakenly) have the code retrieving new credentials with each request, but have now restructured it to only get them when they have expired. This seems to have made some improvement.
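
For completeness, the credential caching is roughly this shape. Here fetch_new_token is a hypothetical stand-in for whatever token request your authentication flow uses; the point is only that the token is reused until shortly before it expires:

import time

_token = None
_token_expires_at = 0

def get_bearer_token():
    """Return a cached bearer token, requesting a new one only on expiry."""
    global _token, _token_expires_at
    # Refresh a minute early so a slow request can't go out with a stale token.
    if _token is None or time.time() > _token_expires_at - 60:
        _token, expires_in = fetch_new_token()  # hypothetical: returns (token, lifetime in seconds)
        _token_expires_at = time.time() + expires_in
    return _token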