Hi,
I am trying to use the DataReader Source to import a table from a PostgreSQL database into a new table in a SQL 2005 database. It works for all tables except one, which has over 80,000 records with long text columns. When I limit the import to a fraction of the records (3,000 to 4,000) it works fine, but when I try to import all of them it generates the following errors:
Source: DataReader using ADO.NET and the ODBC driver to access the PostgreSQL table
Destination: OLE DB Destination - new table in SQL 2005
(BTW - the import succeeds with a DTS package in SQL 2000.)
Errors
Error: 0x80070050 at Import File, DTS.Pipeline: The file exists.
Error: 0xC0048019 at Import File, DTS.Pipeline: The buffer manager could not get a temporary file name. The call to GetTempFileName failed.
Error: 0xC0048013 at Import File, DTS.Pipeline: The buffer manager could not create a temporary file on the path "C:\Documents and Settings\michaelsh\Local Settings\Temp". The path will not be considered for temporary storage again.
Error: 0xC0047070 at Import File, DTS.Pipeline: The buffer manager cannot create a file to spool a long object on the directories named in the BLOBTempStoragePath property. Either an incorrect file name was provided, or there are no permissions.
Error: 0xC0209029 at Import File, DataReader Source - Articles [1]: The "component "DataReader Source - Articles" (1)" failed because error code 0x80004005 occurred, and the error row disposition on "output column "probsumm" (1639)" specifies failure on error. An error occurred on the specified object of the specified component.
Error: 0xC02090F5 at Import File, DataReader Source - Articles [1]: The component "DataReader Source - Articles" (1) was unable to process the data.
Error: 0xC0047038 at Import File, DTS.Pipeline: The PrimeOutput method on component "DataReader Source - Articles" (1) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
End
Any idea why it can't create a temp file, or why it complains that "The file exists" - which file, and where? Any help or alternative suggestions are greatly appreciated. What am I missing or doing wrong here?
Best,
Michael Sh
I am definitely receiving the exact same error, with the exact same setup - everything works fine until I hit 19,932 records. Looking in the folder the error references, it appears to be a case of too many temporary files: the directory gets huge with a ton of temporary file names, and I guess the package eventually runs out of temporary file names to use. I think that is what produces the "The file exists" error.
Right now, I am trying to see if I can just do a data conversion from the text string into a varchar(8000) field - I'm just going to truncate the column if needed.
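An alternative, if the data conversion route doesn't pan out, would be to truncate on the PostgreSQL side in the source query itself rather than in the pipeline - roughly like this (the table and key column names are placeholders; probsumm is the long column named in the error output above):

SELECT article_id,                        -- placeholder key column
       substr(probsumm, 1, 8000) AS probsumm
FROM   articles;                          -- placeholder table name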
Have you made any progress?
-robert
|||After exploring many different options, the only way I could get it to work on the original machine (a desktop with 2 processors and 2 GB RAM) was to create multiple data flow tasks, limit each to about 5,000 records, and then connect them to each other. Not a pretty solution, but once it was working I could at least rule out the data in the records as the culprit.
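Each data flow task simply pulled its own slice of the source table; the source queries looked something along these lines (table and key column names are placeholders, and the 5,000-row slice size is just what happened to work for me):

SELECT * FROM articles ORDER BY article_id LIMIT 5000 OFFSET 0;      -- task 1
SELECT * FROM articles ORDER BY article_id LIMIT 5000 OFFSET 5000;   -- task 2
SELECT * FROM articles ORDER BY article_id LIMIT 5000 OFFSET 10000;  -- task 3
-- ...and so on until the full row count is covered.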
Ironically, once I installed SQL 2005 on my laptop, the exact same package worked without a hitch on the first run, going through 80,000+ records. Both machines run XP Pro; the only differences are that my laptop has a single processor and uses a slightly older version of the PostgreSQL ODBC driver, which I can no longer locate, so I can't tell whether that is the culprit or not.
Best,
Michael Sh
|||My first guess is that the pipeline is trying to create a file to spool the long columns because of memory limits, and it is unable to create that file either because of permissions or because it does not know where to put it.
Do you have file write permissions where this package is running?
What are the TEMP and TMP environment variables set to?
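(For example, running echo %TEMP% and echo %TMP% at a command prompt on the machine where the package executes will show where they currently point.)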
Thanks
Mark
|||You can tell from the error message that the temp directory is located at:
C:\Documents and Settings\<username>\Local Settings\Temp
Basically, what I did was empty out this folder and then run my package. As soon as it started running, it began adding thousands of files named "DTS####.tmp".
It seems that at some point it stopped and reported that it could not find another filename to use, because the name it generated already existed.
I'm sure we have write permissions to this folder; it just gets filled up. To me this is a flaw in SSIS.
As a test, I created a similar package in DTS, and it ran quickly and effortlessly.
|||Thanks for the extra information.
Could you share exactly how many files are in the temp directory at the point of failure?
Also, approximately how many columns do you have in the table, and how many of those are LOB columns?
Thanks
Mark
|||I reviewed the temp files and found that they are named as follows:
DTS####.tmp, where each # is a hexadecimal character.
The files ran in sequence like this:
DTSAAA0.TMP
DTSAAA1.TMP
DTSAAA2.TMP
DTSAAA3.TMP
.....
DTSAAAA.TMP
DTSAAAB.TMP
...
DTSAAAF.TMP
DTSAAB0.TMP
...
and so on, so you can see that there is definitely a limit to this naming scheme.
My dataset has only about 8 columns, just one of which is a TEXT field.
I have about 77,000 records, and the data in the TEXT column is sometimes quite large (> 8,000 characters).
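For what it's worth, that naming scheme matches how GetTempFileName (the call named in the original error) builds names: a three-character prefix plus a 16-bit hexadecimal counter, i.e. DTS0000.TMP through DTSFFFF.TMP, so there can be at most 0xFFFF = 65,535 unique names per prefix in one directory. And if the counter effectively starts partway through the range - the listing above begins at DTSAAA0.TMP - then only about 0xFFFF - 0xAAA0 = 21,855 names would be left, which is at least in the same ballpark as the ~19,932 records I mentioned earlier.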
-rob
|||Yes, it does sound like you are running out of temp files. Now we need to figure out why so many are created.
At this point, I recommend that you go here: http://msdn.microsoft.com/sql/bi/integration/, choose the MSDN Product Feedback link under Support, and select Report a Bug. This is likely something that will need to be reproduced and investigated by the development team.
Thanks
Mark
|||A teammate suggested something that might help you work around this. You can provide additional temp paths by setting the BLOBTempStoragePath property to a semicolon-separated list of paths. That way, more unique temporary file names will be available.
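Just to illustrate the format (the folder names below are only placeholders, and each directory should already exist and be writable by the account running the package), the data flow task's BLOBTempStoragePath property would be set to something like:

C:\SsisBlobTemp1;D:\SsisBlobTemp2;E:\SsisBlobTemp3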
Mark
|||Great find. I actually took a different route for a temporary fix and created a DTS package (*gasp!*) to just jam the data into my table...
But if this ever happens to me again, I will give it a try!
|||Even after setting a semicolon-delimited list of paths in BLOBTempStoragePath, it still gives me the same error.
Package Package