Hi

I want to load data from a flat file into a SQL table. The file has a header row and detail rows, all of which need to be loaded. I've tried various suggestions I came across in forums and EndOfStream is the one that loaded all the rows. The problem is that the data gets truncated. Here's a sample of the data:

H,01.00,00001,Header row text                                                                                                                                                                                                                 
D,2000/10/25,1            ,Detail row info 1         ,   1.00,uom    ,000000001,N,      1.70 00  14.000 +0000000015       +6.15      +25.50 +0000000056      +22.96      +95.20 +0000000660     +270.60    +1122.00                           
D,2000/10/25,2            ,Detail row info 2         ,   1.00,uom    ,000000002,N,      4.99 01   0.000 +0000000003       +7.50      +14.97 +0000000008      +20.00      +39.92 +0000000070     +175.00     +459.30

Note that the length of both the header and detail rows is the same; the header just has more trailing spaces. This is the code I'm using to read the file and insert the data into a SQL table:

public void Main()
{
    string FileName = @"C:\TextFile.DAT";

    using (StreamReader sr = new StreamReader(FileName))
    {
        int counter = 0;

        const string connStr = "Data Source=servername;Initial Catalog=sqldatabase;Integrated Security=True;";
        const string query = "Insert Into Table_1 (FileName, DateImported, Imported_Text) Values (@FileName, GETDATE(), @Imported_Text)";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            using (SqlCommand cmd = new SqlCommand(query, conn))
            {
                cmd.Parameters.Add(new SqlParameter("@FileName", SqlDbType.NVarChar)).Value = FileName;
                cmd.Parameters.Add(new SqlParameter("@Imported_Text", SqlDbType.NVarChar));

                while (!sr.EndOfStream)
                {
                    cmd.Parameters["@Imported_Text"].Value = sr.ReadLine();
                    cmd.ExecuteNonQuery();

                    counter++;
                }
            }
        }
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}

The header row loads fine, but the detail rows only load up to a certain point; the rows in the sample only load up to 1.70 and 4.99 respectively. No error is raised, so I can't figure out what causes the truncation. As a test I deleted the header row, ran the Script Task, and the data loaded perfectly. As soon as I add the header row back to the file the truncation returns. Could it be the fact that the truncated columns are not delimited like the rest of the columns? I haven't been able to find a solution to the mixed-delimiters issue, so for the time being I'm loading everything into one column instead of separate columns. If I can just solve the truncation problem I can then work on splitting the data into columns. Any help will be greatly appreciated.

First thing I'd do is separate the read and the setting of the parameter in line 24 so I could put a breakpoint there and see what is actually being read. Also, how many characters are allowed in the table itself?

Splitting is the easy part, btw :)

The table column is nvarchar(max). I separated the read and setting of the parameter, stepped through the code and found that when I mouse over the parameter the entire row is being read. However, when I go to the Text Visualizer on the same parameter the data is truncated. So, it looks like the issue is before the insert into SQL.

Right after you set the parameter in line 24, check what the value of cmd.Parameters["@Imported_Text"].Size is (it's a property). I suspect it is getting set to the size of the first line which is shorter than the other lines. It should be set to equal or greater than your longest line.
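If sizing does turn out to be the culprit, here's a minimal sketch of what to look for and the usual fix (parameter names reused from the post above; no live connection is needed to see how the Size property behaves):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ParameterSizeSketch
{
    static void Main()
    {
        // Created without an explicit size: Size may end up inferred from the
        // first value bound, which is the failure mode suspected above.
        var inferred = new SqlParameter("@Imported_Text", SqlDbType.NVarChar);
        inferred.Value = "short first line";
        Console.WriteLine(inferred.Size);

        // Passing -1 maps the parameter to nvarchar(max), so no line can be
        // clipped by the parameter size, however long it is.
        var unbounded = new SqlParameter("@Imported_Text", SqlDbType.NVarChar, -1);
        Console.WriteLine(unbounded.Size); // -1
    }
}
```

In the original snippet that would mean constructing the @Imported_Text parameter with -1 as a third argument.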

I've eliminated the Size property as the issue. For some reason, when the rows are read, backslashes are added, which I think is what causes the truncation. Below is the value of each row in debug mode:

"H,01.00,00001,Header row text             \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
"D,2000/10/25,1            ,Detail row info 1        ,   1.00,uom    ,000000001,N,      1.70\000\0 14.000\0+0000000015\0      +6.15\0     +25.50\0+0000000056\0     +22.96\0     +95.20\0+0000000660\0    +270.60\0   +1122.00\0                          "
"D,2000/10/25,2            ,Detail row info 2        ,   1.00,uom    ,000000002,N,      4.99\001\0  0.000\0+0000000003\0      +7.50\0     +14.97\0+0000000008\0     +20.00\0     +39.92\0+0000000070\0    +175.00\0    +459.30\0                          "

I know that with a normal string like a file path you just add the @ character in front, but since this happens at run-time I haven't found a fix for it yet.
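For what it's worth, those aren't literal backslashes: \0 is how the debugger renders the NUL character (code point 0), so the character is embedded in the data itself and verbatim @ strings don't come into it. A quick sketch (using a made-up line fragment) showing how to detect and strip them:

```csharp
using System;

class NullCharSketch
{
    static void Main()
    {
        // The debugger's \0 is a single NUL character, not a backslash + zero.
        string line = "1.70\0 14.000";

        Console.WriteLine(line.IndexOf('\0'));         // 4: a NUL right after "1.70"
        Console.WriteLine(line.Replace((char)0, ' ')); // "1.70  14.000"
    }
}
```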

Are you sure those nulls are not in your source file?
When I run a small test program reading in your sample data I get no truncation or nulls.
I cut and pasted your sample text into a text file and ran the following code with no problems.

// Collection of a custom data type to hold the data
List<FileData> datalist = new List<FileData>();

// Method to read the data into the collection
public void LoadData()
{
    string FileName = @"TextFile1.txt";

    using (var sr = new StreamReader(FileName))
    {
        int counter = 0;

        while (!sr.EndOfStream)
        {
            datalist.Add(new FileData(counter++, sr.ReadLine()));
        }
    }
}

// Class to hold the read data
public class FileData
{
    public FileData(int id, string text)
    {
        ID = id;
        Text = text;
    }

    public int ID { get; set; }
    public string Text { get; set; }
}

// Button event to read the data and load it into a data grid
private void button1_Click(object sender, EventArgs e)
{
    LoadData();
    dataGridView1.DataSource = datalist;
}

OK, I modified my source text file to have nulls where you showed them.
I needed to change my LoadData method to be like this to read in the data.

// Method to read the data into the collection
public void LoadData()
{
    string FileName = @"TextFile2.txt";

    using (var sr = new StreamReader(FileName))
    {
        int counter = 0;

        while (!sr.EndOfStream)
        {
            var newline = sr.ReadLine();
            // Replace embedded NUL characters with spaces
            newline = newline.Replace((char)0, (char)0x20);
            datalist.Add(new FileData(counter++, newline));
        }
    }
}

Note: I think the issue might just be display of the value (it truncates at the first \0).
However, all the data is probably being read OK.

Thanks, I'll try it tomorrow.

Using the \0 text file without the null-to-space conversion, I inserted the data into a SQL Express table.
Using MS SQL Management Studio I inspected the data inserted into the database.

When I do "Select Top 1000 rows" it showed the text as truncated.
However, when I do "Edit Top 200 rows" the text was all there.

I therefore think that the data is present but cannot be displayed, as the display tools are C-based and truncate at the first null.
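One way to confirm that from code rather than the grid is to ask the server for the stored length directly. This sketch reuses the placeholder connection string and table from the original post (adjust names to match yours); LEN and DATALENGTH are computed on the stored value, so they see past any embedded NULs that a display routine stops rendering at.

```csharp
using System;
using System.Data.SqlClient;

class VerifyStoredLength
{
    static void Main()
    {
        // Same (placeholder) connection string and table as the original post.
        const string connStr = "Data Source=servername;Initial Catalog=sqldatabase;Integrated Security=True;";

        const string query =
            "SELECT FileName, LEN(Imported_Text), DATALENGTH(Imported_Text) FROM Table_1";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(query, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1} chars, {2} bytes", reader[0], reader[1], reader[2]);
            }
        }
    }
}
```

Note that LEN ignores trailing spaces, so DATALENGTH (bytes, two per nvarchar character) is the more reliable check for rows padded like these.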

Thanks Nick, your code solved my truncation problem! You are an absolute rock star! :) And by the way, never in a million years would I have thought that Select and Edit table would give different results!

Thanks Momerath for your help as well.

You guys saved me from dying a slow and painful death trying to sort this out! :)
