Hello! I'm writing a Python script that will import all the Excel files in a folder and write them into a SQL Server table. The script runs if I just point it to a single Excel file, but I'm stumped on setting up an iteration loop that will read through all the Excel files in a folder. I need to insert some kind of loop between the "if file_to_import.endswith('.XLS'):" and "column_count=10" lines. Any ideas or suggestions would be very much appreciated!

# Import modules
from xlrd import open_workbook, cellname
import arcpy
import pyodbc as p
import os

# Database Connection Info
server = "Server"
database = "DB"
connStr = ('DRIVER={SQL Server Native Client 10.0};SERVER=' + server + ';DATABASE=' + database + ';' + 'Trusted_Connection=yes')

# Assign path to Excel files
folder_to_import = '\\\\Location\\DATA'
l_files_to_import = os.listdir(folder_to_import)
for file_to_import in l_files_to_import:
    if file_to_import.endswith('.XLS'):

            column_count=10

# Open entire workbook
book = open_workbook(file_to_import)

# Use first sheet
sheet = book.sheet_by_index(0)

# Open connection to SQL Server Table
conn = p.connect(connStr)

# Get cursor
cursor = conn.cursor()

# Assign the query string without values once, outside the loop
query = "INSERT INTO HED_EMPLOYEE_DATA (Company, Contact, Email, Name, Address, City, CentralCities, EnterpriseZones, NEZ, CDBG) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"

# Iterate through each row

for row_index in range(1, sheet.nrows):

    row_num          = row_index
    Company          = sheet.cell(row_index,0).value
    Contact          = sheet.cell(row_index,1).value
    Email            = sheet.cell(row_index,2).value
    Name             = sheet.cell(row_index,3).value
    Address          = sheet.cell(row_index,4).value
    City             = sheet.cell(row_index,5).value
    CentralCities    = sheet.cell(row_index,6).value
    EnterpriseZones  = sheet.cell(row_index,7).value
    NEZ              = sheet.cell(row_index,8).value
    CDBG             = sheet.cell(row_index,9).value

    values = (Company, Contact, Email, Name, Address, City, CentralCities, EnterpriseZones, NEZ, CDBG)

    cursor.execute(query, values)

# Close cursor
cursor.close()

# Commit transaction
conn.commit()

# Close SQL server connection
conn.close()

The indentation in my original code above was wrong; I've fixed it so that everything now runs inside the existing for loop.
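
Trimmed to the first few lines of the loop body, the structure is now roughly this:

for file_to_import in l_files_to_import:
    if file_to_import.endswith('.XLS'):
        column_count = 10

        # Open entire workbook
        book = open_workbook(file_to_import)

        # Use first sheet
        sheet = book.sheet_by_index(0)

        # ...the rest of the script is indented the same way...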

However, I am now getting this error: "IOError: [Errno 2] No such file or directory: 'Report_Test.XLS'"

The file Report_Test.XLS does exist in the location the script is supposed to pull from, and the fact that it can read the file name but then says the file doesn't exist is very confusing.

Any ideas about this error message?

Always use complete path names. os.listdir() returns bare file names, so the program is not looking in the directory the files are located in; a bare name gets resolved against the current working directory instead. Also, you should be able to use a forward slash, /directory_name, on any OS.

##-------------------------------------------------------------
## Assumes this code is now indented to run under the for() loop
##-------------------------------------------------------------
folder_to_import = '/Location/DATA'
l_files_to_import = os.listdir(folder_to_import)
for file_to_import in l_files_to_import:
    if file_to_import.endswith('.XLS'):
        column_count = 10

        # Open entire workbook
        book = open_workbook(os.path.join(folder_to_import, file_to_import))
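
As a side note, glob can hand you the folder and file name already joined, which avoids the bare-name problem entirely. This is just an alternative sketch, not a change you have to make:

import glob
import os
from xlrd import open_workbook

folder_to_import = '/Location/DATA'

# glob.glob() returns full paths, so there is nothing left to join
# (on Windows the '*.XLS' match is case-insensitive)
for full_path in glob.glob(os.path.join(folder_to_import, '*.XLS')):
    print(full_path)                  # e.g. /Location/DATA/Report_Test.XLS
    book = open_workbook(full_path)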

Thank you woooee! Now the problem I'm getting is the error below, which seems strange:

Traceback (most recent call last):
  File "C:\J\Projects\Script\Import.py", line 49, in <module>
    book = open_workbook(os.path.join(folder_to_import, file_to_import))
  File "C:\Python26\ArcGIS10.0\lib\xlrd\__init__.py", line 435, in open_workbook
    ragged_rows=ragged_rows,
  File "C:\Python26\ArcGIS10.0\lib\xlrd\book.py", line 116, in open_workbook_xls
    bk.parse_globals()
  File "C:\Python26\ArcGIS10.0\lib\xlrd\book.py", line 1208, in parse_globals
    self.handle_writeaccess(data)
  File "C:\Python26\ArcGIS10.0\lib\xlrd\book.py", line 1173, in handle_writeaccess
    strg = unpack_unicode(data, 0, lenlen=2)
  File "C:\Python26\ArcGIS10.0\lib\xlrd\biffh.py", line 303, in unpack_unicode
    strg = unicode(rawstrg, 'utf_16_le')
  File "C:\Python26\ArcGIS10.0\lib\encodings\utf_16_le.py", line 16, in decode
    return codecs.utf_16_le_decode(input, errors, True)
UnicodeDecodeError: 'utf16' codec can't decode byte 0x20 in position 108: truncated data

I assume you are using Python 2.x, so try something like this.
book = open_workbook(os.path.join(folder_to_import, file_to_import), coding='utf-16')

If you have unicode file names then use
l_files_to_import = os.listdir(u"/Location/DATA")
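
You could also wrap the open_workbook() call in a try/except so that one unreadable file does not kill the whole run, and so you can see exactly which file xlrd is choking on. Just a sketch of the idea:

from xlrd import open_workbook, XLRDError

for file_to_import in l_files_to_import:
    if file_to_import.endswith('.XLS'):
        full_path = os.path.join(folder_to_import, file_to_import)
        try:
            book = open_workbook(full_path)
        except (XLRDError, UnicodeDecodeError) as e:
            # Report the file that failed and move on to the next one
            print('Could not read ' + full_path + ': ' + str(e))
            continue
        # ...the rest of the per-file work stays the same...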

Hey woooee, I tried the coding= and the u"" suggestions, and I got back that it did not recognize 'coding' or the 'u'. So I'm not sure what's happening. When these Excel files were created they were exported from a workflow as bytes, so I'm thinking that has something to do with it.
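
One way to sanity-check that theory (just a quick look, not a fix): a genuine binary .xls workbook is an OLE2 compound file and starts with the 8-byte signature D0 CF 11 E0 A1 B1 1A E1, so dumping the first bytes of each file shows whether the export really produced standard .xls workbooks or something else renamed to .XLS:

import os

folder_to_import = '/Location/DATA'
OLE2_SIGNATURE = b'\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1'

for name in os.listdir(folder_to_import):
    if name.endswith('.XLS'):
        f = open(os.path.join(folder_to_import, name), 'rb')
        header = f.read(8)
        f.close()
        if header == OLE2_SIGNATURE:
            print(name + ': starts with the OLE2 signature (standard binary .xls)')
        else:
            print(name + ': does NOT start with the OLE2 signature')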

As an aside, I noticed that you have posted both here and on DevShed with the same problem. FYI, cross-posting is generally frowned upon, as it leads to duplication of effort. You should only post to one forum at a time, and only re-post elsewhere if you think you have exhausted the possibilities there; if you do cross-post, always give a link to the other thread so that everyone can see what has already been done.

I'm sorry, I didn't realize posting the same question on different websites was a problem. It seemed like the best way to get a variety of answers from multiple sources. This is my first time using forums, so now I know for the future.
