Offline Backup

A “cold backup” of all data
• Why do you want it?
• How is this affecting you?
• Do you currently have a workaround solution? If yes, what is it?
• Can we reach out in future to ask follow up questions about this idea?


Being an “old school” person, I want to have a “cold” backup offline of all my data, not in the cloud. Is this possible?

I have some Python scripts that download my portfolios and transactions every night and save them in a database.

It’s pretty easy to do once you’ve had support enable API access for your account.
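For anyone wanting a starting point on the “save it in a database” half, here’s a rough sketch using SQLite as a stand-in for whatever database you prefer. The table layout, portfolio id, and sample payload are all made up for illustration:

```python
import json
import sqlite3
from datetime import date

# Keep one raw JSON snapshot per portfolio per day; swap SQLite for your own database.
conn = sqlite3.connect("backup.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS backups (
           portfolio_id TEXT,
           snapshot_date TEXT,
           payload TEXT,
           PRIMARY KEY (portfolio_id, snapshot_date)
       )"""
)

def save_snapshot(portfolio_id, payload):
    # INSERT OR REPLACE so re-running the script on the same day is safe
    conn.execute(
        "INSERT OR REPLACE INTO backups VALUES (?, ?, ?)",
        (str(portfolio_id), date.today().isoformat(), json.dumps(payload)),
    )
    conn.commit()

# e.g. after r = requests.get(...); rj = r.json():
save_snapshot("12345", {"trades": [{"id": 1, "market": "ASX"}]})
```

Storing the raw JSON keeps the backup lossless; you can always flatten it into tables later.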

I was only thinking the same thing the other day. I know this is a highly fault tolerant environment - but I do like to have my own copy of my data too.

However… you just made me realise I can probably use the API along with Google Sheets to populate some data I am collecting manually on a daily and monthly basis. I haven’t written any code in a while, but this will be my autumn/winter project. My mind is racing with potential use cases now. Nice!

Would be handy to have a dedicated forum category for API use cases.

It took me a little while to get OAuth working in my script but that was the hardest bit.

I’m happy to share some code snippets from my simple use cases and would be interested in what others are doing with the API also.

And a +1 for an API section in the community.

When I swapped from a spreadsheet to Sharesight some years ago, I was concerned that I no longer had a copy of my own data. Sharesight tech support suggested that I download spreadsheets, but for a number of reasons that is not a very elegant solution.

I am a Python programmer, and the idea of running a script to back up data to, say, a MySQL database sounds very attractive. Some snippets to get started, e.g. the OAuth, which is always problematic, would be appreciated.

This one will print out all the transactions for each of your portfolios.

Set client_id & client_secret to the correct values after you’ve had the API enabled.

from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
import requests
import json

client_id = ""      # From
client_secret = ""  # From
access_token_url = ""

client = BackendApplicationClient(client_id=client_id)
oauth = OAuth2Session(client=client)
token = oauth.fetch_token(token_url=access_token_url, client_id=client_id, client_secret=client_secret)
access_token = token['access_token']
head = {'Authorization': f'Bearer {access_token}'}

# Get the portfolios
r = requests.get('', headers=head)  # portfolios endpoint
j = r.json()

for p in j['portfolios']:
    name = p['name']
    pid = p['id']
    print(f'Retrieving {name} - {pid}')
    vurl = f'{pid}/trades.json'  # trades endpoint for this portfolio
    r = requests.get(vurl, headers=head)
    rj = r.json()
    print(json.dumps(rj, indent=2))

I’m totally tech illiterate (I thought Python was a snake). Is there any way to do a “one click” download that could be imported into Excel? Then formatted and stored on a stick? Should Sharesight have a hack attack and lose our data (or go bankrupt etc.), we’re all stuffed?


The All Trades Report might be what you are after.

It’ll be a manual solution though: you’ll need to run it for each portfolio separately, changing the date range, and then you can download it as a CSV which you can read into Excel.
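If you do end up with one CSV per portfolio, a few lines of pandas can stitch them into a single file for Excel. The file names here are made up, and the two tiny stand-in frames exist only so the sketch runs end to end:

```python
import glob
import pandas as pd

# Two tiny stand-in files; in practice these would be the All Trades Report
# CSVs you downloaded for each portfolio.
pd.DataFrame({"Code": ["BHP"], "Quantity": [10]}).to_csv("all_trades_smsf.csv", index=False)
pd.DataFrame({"Code": ["CBA"], "Quantity": [5]}).to_csv("all_trades_personal.csv", index=False)

# Read every per-portfolio CSV and stack them into one dataframe
frames = []
for path in sorted(glob.glob("all_trades_*.csv")):
    df = pd.read_csv(path)
    df["source_file"] = path  # remember which portfolio's report each row came from
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined.to_csv("combined_trades.csv", index=False)
```

The extra `source_file` column keeps the portfolio origin visible after the rows are merged.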


amazing! thank you for sharing :smiley:

Thanks for this - I tried using the Authorisation Code method inside a Flask app but was having some issues - I probably need to use a proper redirect URI rather than the default local version. I ended up using the legacy method with username and password.

I’ve written a script to upload cash transactions (in a csv) into a cash account (10 years worth of cash transactions in some cases). It mostly works, although I do get weird behaviour with the date_time not being consumed properly at the end-point. I’ve observed the same behaviour when adding transactions manually (2021-07-09T10:00:00.000+10:00) - the time and timezone appear to be completely random to the point of altering the date by one day in some cases (in my script at least).

From memory I had to try a couple of oauth libraries before I managed to get it working.

I requested API access, installed all the necessary packages, and it worked first time - thanks @gavcam!!

Could someone with api access try adding some cash transactions to a cash account and see if they have any issues with the transaction date?

The code below should add 80 transactions - 20 transactions on June 1st, 20 on June 8th, 20 on June 15th and 20 on June 22nd. I’ve used various methods of defining the date format for each batch.

From my tests I often end up with some transactions appearing a day early or late, regardless of the method I use for the date format. You can also see that the response from the PUT call gives variable date format responses (the time and timezone fluctuate).

Interested to see if anyone else has encountered this issue.

from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
import requests
from datetime import datetime
import pytz

client_id = ""  # From
client_secret = ""  # From
access_token_url = r""

client = BackendApplicationClient(client_id=client_id)
oauth = OAuth2Session(client=client)
token = oauth.fetch_token(
	token_url=access_token_url, client_id=client_id, client_secret=client_secret
)
access_token = token["access_token"]
cash_account_id = ""    # cash account id is available at the end of a cash account's URL
						# easiest to make a new cash account (you'll need to be on Investor level or higher)

i = 0
aest = pytz.timezone("Australia/Melbourne")
dt = datetime.strptime("2021-6-1", "%Y-%m-%d")
dt = aest.localize(dt)

date_str_1 = datetime.isoformat(dt, sep="T", timespec="milliseconds")
date_str_2 = "2021-06-08T00:00:00.000+10:00"
date_str_3 = "2021-06-15T00:00:00.000Z"
date_str_4 = "2021-06-22"
print(
	f"date format 1 - {date_str_1}\ndate format 2 - {date_str_2}\n"
	f"date format 3 - {date_str_3}\ndate format 4 - {date_str_4}"
)

for i in range(0, 80):
	trans_json = {
		"access_token": access_token,
		"cash_account_transaction": {
			"description": "Random Deposit",
			"amount": 20,
			"date_time": date_str_1
			if i < 20
			else date_str_2
			if i < 40
			else date_str_3
			if i < 60
			else date_str_4,
			"type_name": "DEPOSIT",
		},
	}
	r = requests.put("", json=trans_json)  # cash account transactions endpoint
	print(i, " - ", r.status_code, " - ", r.text)
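For what it’s worth, the off-by-one-day behaviour is exactly what you’d see if the server stores or echoes the timestamp in a different timezone. A quick check with nothing but the standard library:

```python
from datetime import datetime, timezone, timedelta

# Midnight June 1st in Melbourne (UTC+10) expressed in UTC is still May 31st,
# so a server that stores or echoes the timestamp in another zone can appear
# to move the transaction back a day.
aest = timezone(timedelta(hours=10))
local_midnight = datetime(2021, 6, 1, 0, 0, tzinfo=aest)
as_utc = local_midnight.astimezone(timezone.utc)

print(local_midnight.date())  # 2021-06-01
print(as_utc.date())          # 2021-05-31

# One way to sidestep the ambiguity: send midday local time, which stays on
# the same calendar day in any timezone within +/- 11 hours.
local_midday = datetime(2021, 6, 1, 12, 0, tzinfo=aest)
print(local_midday.astimezone(timezone.utc).date())  # 2021-06-01
```

That doesn’t explain why the echoed timezone fluctuates between responses, but it would account for the date shifting by one day.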

+1 for an API community - for anyone interested, I have a Python script for scraping daily prices for all my custom investments and creating new price entries through the Sharesight API.
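In case it helps anyone get started on something similar, here’s a minimal sketch of the parsing half only. The HTML snippet, CSS class, and payload field names are all made up, and the actual fetch and upload calls are left out like the other snippets in this thread:

```python
import re

# Hardcoded stand-in for the HTML you'd fetch from the price page with requests.
html = '<span class="last-price">AUD 1.2345</span>'

# Hypothetical pattern; adjust it to whatever markup the real price page uses.
match = re.search(r'class="last-price">[A-Z]{3}\s+([0-9.]+)<', html)
price = float(match.group(1))
print(price)  # 1.2345

# A payload along these lines could then be sent to the custom investment
# prices endpoint (exact URL and field names depend on the API docs):
payload = {"last_traded_price": price, "last_traded_on": "2021-10-16"}
```

Running it once a day from cron (or a scheduled task) gives you a daily price history without any manual entry.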

Just wondering if you keep this data in JSON or whether you stick it in a .csv file? I’ve managed to compile my trades for all my portfolios into a single .json file using your method, but I must admit I struggle to save it into a readable .csv file. Any tips would be greatly appreciated! Thanks!


I store the JSON in a NoSQL database (MongoDB in my case) and then I have Python Jupyter notebooks that do all the analysis and reporting, using Papermill to run them each evening.

To write a CSV you could read the json into a pandas dataframe and then use the to_csv() method to create the CSV file with the data you want.



I see, thanks. I think I’ll just store the json as I’m struggling to convert it to a pd.DataFrame.

If you can get your json into a string then below is a starter for you.

import pandas as pd
import json

j = '{ "data" : [ { "code" : "BHP", "exchange" : "ASX", "price" : { "date" : "16102021", "value" : 1.00 }}, {"code" : "CCL", "exchange" : "LSE", "price" : { "date" : "16102021", "value" : 2.00 }} ]}'
d = json.loads(j)
df = pd.json_normalize(d, record_path="data", max_level = 5)
df.to_csv("outfile.csv", index=False)
# print(df)

Cheers - I was able to do it by using the start of your code and modifying with:

import json
import pandas as pd

# Create a new list called rjs to gather all the data
rjs = []
for p in j['portfolios']:
    name = p['name']
    pid = p['id']
    print(f'Retrieving {name} - {pid}')
    vurl = f'{pid}/trades.json'
    r = requests.get(vurl, headers=head)
    rj = r.json()
    rjs.append(rj)

with open('/Users/Filename.json', 'w') as json_file:
    json.dump(rjs, json_file)

# Opening JSON file and loading the data
# into the variable data
with open('/Users/Filename.json') as json_file:
    data = json.load(json_file)
result = pd.json_normalize(data, 'trades')

result.to_csv(r'/Users/Filename.csv', index=False)

It might not be the sexiest way to do it, but at least it does what I want! Thanks again for sharing the initial code :slight_smile: Let us know if you use the API for anything else.