No RDBMS or ETL app has a magic bullet: getting the data in is the big challenge. Some RDBMSs have bulk-load facilities for CSV or the like.
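For example (a sketch only; the orders table and file paths are placeholders, and the exact options vary by product):

    -- PostgreSQL: server-side bulk load of a CSV file
    COPY orders FROM '/data/orders.csv' WITH (FORMAT csv, HEADER true);

    -- SQL Server: the same idea via BULK INSERT
    BULK INSERT orders
    FROM 'C:\data\orders.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);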
One pitfall is that a big insert may take out many long row, page, or table locks. One middle path is to bulk-import into a temporary staging table and then use row-count limits to apply just N rows at a time. Or you can simply limit the batch size before any import. Locks may not matter for a reporting or batch database, but they are critical in a transactional one.
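A minimal sketch of that N-rows-at-a-time apply, in T-SQL; the table and column names (staging_orders, orders, order_id) are made up, and other RDBMSs spell the loop and the row limit differently:

    -- Each pass copies at most 1000 not-yet-applied rows, so each
    -- transaction is short and the locks it takes are brief.
    WHILE 1 = 1
    BEGIN
        INSERT INTO orders (order_id, customer_id, amount)
        SELECT TOP (1000) s.order_id, s.customer_id, s.amount
        FROM staging_orders AS s
        WHERE NOT EXISTS (SELECT 1 FROM orders AS o WHERE o.order_id = s.order_id)
        ORDER BY s.order_id;

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to apply
    END

Because each pass skips rows already applied, you can stop and restart the job at any point.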
Once I compacted a huge table with clustered indices after a 90% delete this way, but after each mini-batch I slept for as long as that batch had taken to apply, so my share of database load stayed under 50% and the interactive users never noticed a thing. It took a few days to compact the whole table: copying rows to a batch table, deleting them, reinserting them, then deleting the batch. The related indices compacted too! I recall setting N to the RDBMS page size in bytes, 2048, so each pass worked in whole pages.
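Roughly, the loop looked like the sketch below. This is a from-memory reconstruction in T-SQL with invented names (big_table, batch_table, the id key), and it assumes the batch table mirrors the big table's columns with no identity columns or triggers in the way:

    DECLARE @last_id INT = 0, @t0 DATETIME2, @elapsed_s INT, @wait VARCHAR(8);
    WHILE 1 = 1
    BEGIN
        SET @t0 = SYSDATETIME();

        -- 1. copy the next N = 2048 rows into the batch table
        INSERT INTO batch_table
        SELECT TOP (2048) * FROM big_table
        WHERE id > @last_id
        ORDER BY id;

        IF @@ROWCOUNT = 0 BREAK;            -- reached the end of the table

        SELECT @last_id = MAX(id) FROM batch_table;

        -- 2. delete those rows from the big table, 3. reinsert them (the
        --    reinsert repacks them into dense pages), 4. clear the batch table
        DELETE b FROM big_table AS b JOIN batch_table AS t ON t.id = b.id;
        INSERT INTO big_table SELECT * FROM batch_table;
        TRUNCATE TABLE batch_table;

        -- throttle: sleep for about as long as the batch took (assumes each
        -- batch finishes in well under a minute), so the job never uses much
        -- more than half the database's capacity
        SET @elapsed_s = DATEDIFF(SECOND, @t0, SYSDATETIME()) + 1;
        SET @wait = '00:00:' + RIGHT('0' + CAST(@elapsed_s AS VARCHAR(2)), 2);
        WAITFOR DELAY @wait;
    END

The watermark (@last_id) is what keeps the loop from re-reading rows it has already rewritten.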
On one project, I opened a flat file in a UDF so that, with a stored procedure supplying empty rows, I could call related UDFs to fetch each field inside an insert-select. On another project, we used Java inside Oracle to expose the data like a table in a query for an insert-select, which in Oracle can insert into many tables. My one insert-select for 5 tables was 4k long and terrified …
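The many-tables trick is Oracle's INSERT ALL. A tiny sketch with invented table and column names (the real statement selected from the Java table function rather than a plain staged_rows table):

    INSERT ALL
        -- each INTO clause routes columns from the single SELECT to a different table
        INTO customers (customer_id, customer_name)    VALUES (cust_id, cust_name)
        INTO orders    (order_id, customer_id, amount) VALUES (ord_id, cust_id, amount)
    SELECT cust_id, cust_name, ord_id, amount
    FROM   staged_rows;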