I am currently trying to convert a fairly sizeable JSON object into CSV format. However, when I attempt to do so using a conventional approach (one that seems to work with other files), I am presented with a "ValueError: too many values to unpack".
I have tried to flatten the JSON object using this function:
def flatten(d, parent_key=''):
    items = []
    for k, v in d.items():
        try:
            # recurse into nested dicts, building keys like 'FC_Int32'
            items.extend(flatten(v, '%s%s_' % (parent_key, k)).items())
        except AttributeError:
            # v is not a dict, so keep it under the concatenated key
            items.append(('%s%s' % (parent_key, k), v))
    return dict(items)
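For example, calling it on one of the sample fields shown further below gives me a key like FC_Int32 instead of just FC:

data = {"FC": {"Int32": ["0", "0", "0", "0", "0", "0"]}}
print(flatten(data))
# prints: {'FC_Int32': ['0', '0', '0', '0', '0', '0']}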
However, this concatenates the keys. I am now attempting to remove the nested key and reassign its value to the outer key.
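Something along these lines is what I have in mind (a rough, untested sketch; the helper name collapse is just my own):

def collapse(d):
    # Turn {"OuterKey": {"TypeName": [...]}} into {"OuterKey": [...]},
    # dropping the single type-name key (Int32, Double, DateTime, ...).
    out = {}
    for k, v in d.items():
        if isinstance(v, dict) and len(v) == 1:
            # keep only the inner value (the list), discard the inner key
            out[k] = next(iter(v.values()))
        else:
            out[k] = v
    return out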
Here are some samples; I would remove Int32, Double, and DateTime. I am wondering whether there is a function that would then allow me to use the new keys as column headers in a CSV and write all of the values within each list as the corresponding fields (something along the lines of the sketch after the samples below). I hope I was clear in my description. Thank you all for your help.
"FC": {"Int32": ["0","0","0","0","0","0"]}
and
"PBA": {"Double": ["0","0","0","0","0","0","0","0"]}
and examples such as:
"PBDD": { "DateTime": ["1/1/0001 12:00:00 AM", "1/1/0001 12:00:00 AM","1/1/0001 12:00:00 AM","1/1/0001 12:00:00 AM","1/1/0001 12:00:00 AM", "1/1/0001 12:00:00 AM", "1/1/0001 12:00:00 AM", "1/1/0001 12:00:00 AM"] },