Applies to: SQL Server (all supported versions), Azure SQL Database, Azure SQL Managed Instance
This article provides an overview of how to use the Transact-SQL BULK INSERT statement and the INSERT...SELECT * FROM OPENROWSET(BULK...) statement to bulk import data from a data file into a SQL Server or Azure SQL Database table. It also describes the security considerations for using BULK INSERT and OPENROWSET(BULK...), including how to use these methods to bulk import from a remote data source.
Note
When you use BULK INSERT or OPENROWSET(BULK...), it is important to understand how your version of SQL Server handles impersonation. For more information, see "Security considerations," later in this article.
BULK INSERT statement
BULK INSERT loads data from a data file into a table. This functionality is similar to that provided by the in option of the bcp command; however, the data file is read by the SQL Server process. For a description of the BULK INSERT syntax, see BULK INSERT (Transact-SQL).
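As a quick illustration, here is a minimal sketch of the statement; the table name, file path, and delimiters are hypothetical and would be replaced with your own:

```sql
-- Load a character-format data file into an existing table.
-- All names below are placeholders.
BULK INSERT Sales.Orders
FROM 'C:\data\orders.dat'
WITH (
    FIELDTERMINATOR = ',',  -- field delimiter used in the data file
    ROWTERMINATOR = '\n',   -- row delimiter
    FIRSTROW = 2            -- skip a header row, if the file has one
);
```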
BULK INSERT examples
- BULK INSERT (Transact-SQL)
- Examples of Bulk Import and Export of XML Documents (SQL Server)
- Keep Identity Values When Bulk Importing Data (SQL Server)
- Keep Nulls or Use Default Values During Bulk Import (SQL Server)
- Specify Field and Row Terminators (SQL Server)
- Use a Format File to Bulk Import Data (SQL Server)
- Use Character Format to Import or Export Data (SQL Server)
- Use Native Format to Import or Export Data (SQL Server)
- Use Unicode Character Format to Import or Export Data (SQL Server)
- Use Unicode Native Format to Import or Export Data (SQL Server)
- Use a Format File to Skip a Table Column (SQL Server)
- Use a Format File to Map Table Columns to Data-File Fields (SQL Server)
OPENROWSET(BULK...) function
The OPENROWSET bulk rowset provider is accessed by calling the OPENROWSET function and specifying the BULK option. The OPENROWSET(BULK...) function allows you to access remote data by connecting to a remote data source, such as a data file, through an OLE DB provider.
To bulk import data, call OPENROWSET(BULK...) from a SELECT...FROM clause within an INSERT statement. The basic syntax for bulk importing data is:
INSERT ... SELECT * FROM OPENROWSET(BULK...)
When used in an INSERT statement, OPENROWSET(BULK...) supports table hints. In addition to the regular table hints, such as TABLOCK, the BULK clause can accept the following specialized table hints: IGNORE_CONSTRAINTS (ignores only the CHECK constraints), IGNORE_TRIGGERS, KEEPDEFAULTS, and KEEPIDENTITY. For more information, see Table Hints (Transact-SQL).
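For illustration, the following sketch combines the regular TABLOCK hint with the specialized KEEPIDENTITY hint; the table, data file, and format file names are hypothetical:

```sql
-- Bulk import through the OPENROWSET bulk rowset provider, keeping the
-- identity values supplied by the data file. All names are placeholders.
INSERT INTO Sales.Orders WITH (TABLOCK, KEEPIDENTITY)
SELECT *
FROM OPENROWSET(
    BULK 'C:\data\orders.dat',
    FORMATFILE = 'C:\data\orders.fmt'  -- describes the layout of the data file
) AS src;
```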
For information about additional uses of the BULK option, see OPENROWSET (Transact-SQL).
Examples of INSERT...SELECT * FROM OPENROWSET(BULK...) statements:
- Examples of Bulk Import and Export of XML Documents (SQL Server)
- Keep Identity Values When Bulk Importing Data (SQL Server)
- Keep Nulls or Use Default Values During Bulk Import (SQL Server)
- Use a Format File to Bulk Import Data (SQL Server)
- Use Character Format to Import or Export Data (SQL Server)
- Use a Format File to Skip a Table Column (SQL Server)
- Use a Format File to Skip a Data Field (SQL Server)
- Use a Format File to Map Table Columns to Data-File Fields (SQL Server)
Security considerations
If a user uses a SQL Server login, the security profile of the SQL Server process account is used. A login using SQL Server authentication cannot be authenticated outside of the Database Engine. Therefore, when a BULK INSERT command is initiated by a login using SQL Server authentication, the connection to the data is made using the security context of the SQL Server process account (the account used by the SQL Server Database Engine service).
To read the source data successfully, you must grant the account used by the SQL Server Database Engine access to the source data. In contrast, if a SQL Server user logs on by using Windows Authentication, the user can read only those files that can be accessed by the user account, regardless of the security profile of the SQL Server process.
For example, consider a user who logged in to an instance of SQL Server by using Windows Authentication. For the user to be able to use BULK INSERT or OPENROWSET to import data from a data file into a SQL Server table, the user account requires read access to the data file. With access to the data file, the user can import data from the file into a table even if the SQL Server process does not have permission to access the file. The user does not have to grant file-access permission to the SQL Server process.
SQL Server and Microsoft Windows can be configured to enable an instance of SQL Server to connect to another instance of SQL Server by forwarding the credentials of an authenticated Windows user. This arrangement is known as impersonation or delegation. Understanding how your version of SQL Server handles security for user impersonation is important when you use BULK INSERT or OPENROWSET. User impersonation allows the data file to reside on a different computer than either the SQL Server process or the user. For example, if a user on Computer_A has access to a data file on Computer_B, and the delegation of credentials has been set up appropriately, the user can connect to an instance of SQL Server that is running on Computer_C, access the data file on Computer_B, and bulk import data from that file into a table on Computer_C.
Bulk importing to SQL Server from a remote data file
To use BULK INSERT or INSERT...SELECT * FROM OPENROWSET(BULK...) to bulk import data from another computer, the data file must be shared between the two computers. To specify a shared data file, use its universal naming convention (UNC) name, which takes the general form, \\Servername\Sharename\Path\Filename. Additionally, the account used to access the data file must have the permissions that are required for reading the file on the remote disk.
For example, the following BULK INSERT statement bulk imports data into the SalesOrderDetail table of the AdventureWorks2012 database from a data file named neworders.txt. This data file resides in a shared folder named \dailyorders on a network share directory named salesforce on a system named computer2.

```sql
BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
FROM '\\computer2\salesforce\dailyorders\neworders.txt';
```
Note
This restriction does not apply to the bcp utility because the client reads the file independently of SQL Server.
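For comparison, a bcp import of the same hypothetical share might look like the following sketch; -c selects character format and -T uses Windows (trusted) authentication:

```
bcp AdventureWorks2012.Sales.SalesOrderDetail in \\computer2\salesforce\dailyorders\neworders.txt -c -T
```

Because bcp runs on the client, only the Windows account running the utility needs read access to the UNC path.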
Bulk importing from Azure Blob storage
When you import from Azure Blob Storage and the data is not public (anonymous access), create a DATABASE SCOPED CREDENTIAL based on a SAS key that is encrypted with a MASTER KEY, and then create an external data source for use in your BULK INSERT command.
Note
Do not use an explicit transaction, or you will receive error 4861.
Using BULK INSERT
The following example shows how to use the BULK INSERT command to load data from a CSV file in an Azure Blob Storage location for which you have created a SAS key. The Azure Blob Storage location is configured as an external data source. This requires a database scoped credential using a shared access signature that is encrypted with a master key in the user database.
```sql
-- Optional: a MASTER KEY is not required if a DATABASE SCOPED CREDENTIAL is not
-- needed because the blob is configured for public (anonymous) access.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword1';
GO

-- Optional: a DATABASE SCOPED CREDENTIAL is not required if the blob is
-- configured for public (anonymous) access.
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '******srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z***************';
-- NOTE: Make sure that the SAS token has no leading ?, that you have at least
-- read permission on the object that should be loaded (srt=o&sp=r), and that
-- the expiration period is valid (all dates are in UTC time).

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://****************.blob.core.windows.net/invoices',
    CREDENTIAL = MyAzureBlobStorageCredential
    -- CREDENTIAL is not required if the blob is configured for
    -- public (anonymous) access.
);

BULK INSERT Sales.Invoices
FROM 'inv-2017-12-08.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage');
```
Important
Azure SQL Database does not support reading from Windows files.
Using OPENROWSET
The following example shows how to use the OPENROWSET command to load data from a CSV file in an Azure Blob Storage location for which you have created a SAS key. The Azure Blob Storage location is configured as an external data source. This requires a database scoped credential using a shared access signature that is encrypted with a master key in the user database.
```sql
-- Optional: a MASTER KEY is not required if a DATABASE SCOPED CREDENTIAL is not
-- needed because the blob is configured for public (anonymous) access.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword1';
GO

-- Optional: a DATABASE SCOPED CREDENTIAL is not required if the blob is
-- configured for public (anonymous) access.
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '******srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z***************';
-- NOTE: Make sure that the SAS token has no leading ?, that you have at least
-- read permission on the object that should be loaded (srt=o&sp=r), and that
-- the expiration period is valid (all dates are in UTC time).

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://****************.blob.core.windows.net/invoices',
    CREDENTIAL = MyAzureBlobStorageCredential
    -- CREDENTIAL is not required if the blob is configured for
    -- public (anonymous) access.
);

INSERT INTO achievements WITH (TABLOCK) (id, description)
SELECT *
FROM OPENROWSET(
    BULK 'csv/achievements.csv',
    DATA_SOURCE = 'MyAzureBlobStorage',
    FORMAT = 'CSV',
    FORMATFILE = 'csv/achievements-c.xml',
    FORMATFILE_DATA_SOURCE = 'MyAzureBlobStorage'
) AS DataFile;
```
Important
Azure SQL Database does not support reading from Windows files.
See also
- INSERT (Transact-SQL)
- SELECT Clause (Transact-SQL)
- Bulk Import and Export of Data (SQL Server)
- OPENROWSET (Transact-SQL)
- SELECT (Transact-SQL)
- FROM (Transact-SQL)
- bcp Utility
- BULK INSERT (Transact-SQL)
FAQs
Is bulk insert faster than insert SQL?
With BULK INSERT, only extent allocations are logged instead of the actual data being inserted (under the bulk-logged or simple recovery model), which provides much better performance than INSERT. The real advantage is the reduced amount of data written to the transaction log.
Is bulk insert faster than BCP?
BCP is faster in most cases than BULK INSERT.
What is the difference between insert and bulk insert?
Both the 'Bulk insert with batch size' and 'Use single record insert' options are used for inserting records into a database table. The 'Bulk insert with batch size' option is used when you want the whole dataset to be loaded in batches of a specified size. Typically, larger batch sizes result in better transfer speeds.
How to load bulk data in SQL Server?
Use the BULK INSERT statement or the INSERT...SELECT * FROM OPENROWSET(BULK...) statement to bulk import data from a data file into a SQL Server or Azure SQL Database table.
When should I use bulk insert?
Use a bulk insert operation with a SELECT clause for high-performance data insertion. Use the INSERT and CREATE TABLE AS commands when you need to move data, or a subset of data, from one table into another.
Which is more efficient to insert data into a table: load data or insert?
LOAD DATA (all forms) is more efficient than INSERT because it loads rows in bulk.
How do I get millions of records in SQL table faster?
- Avoid an auto-increment primary key. Most systems today target not just a single region but a global market. ...
- Avoid joining table records (left join, outer join, inner join, etc.). ...
- Don't use SQL locks. ...
- Avoid aggregation functions. ...
- Try to use SQL functions only with a single-record query.
Bulk insert allows us to import a CSV file and insert all the data from the file. BULK INSERT also has the advantage of loading the data in batches of a size set by the "BATCHSIZE" parameter. If we want to abort the insert process when an error occurs while loading, we can use the "MAXERRORS" parameter.
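A sketch of these two options together (table and file names are hypothetical; FORMAT = 'CSV' requires SQL Server 2017 or later):

```sql
-- Commit every 10,000 rows as its own batch, and abort the load
-- once more than 10 rows fail. All names are placeholders.
BULK INSERT dbo.ImportTarget
FROM 'C:\data\import.csv'
WITH (
    FORMAT = 'CSV',
    BATCHSIZE = 10000,  -- rows per batch; each batch commits separately
    MAXERRORS = 10      -- maximum tolerated row errors before aborting
);
```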
How can I make bulk insert faster in SQL Server?
- Use TABLOCK as a query hint.
- Drop indexes during the bulk load operation, and recreate them once it is completed.
- Change the recovery model of the database to BULK_LOGGED during the load operation (see the sketch after this list).
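A minimal sketch of these tips together, assuming a hypothetical database named ImportDb and a hypothetical target table; remember to restore the original recovery model afterwards:

```sql
-- Switch to minimal logging for the duration of the load.
ALTER DATABASE ImportDb SET RECOVERY BULK_LOGGED;

-- TABLOCK takes a table-level lock, which enables minimally logged loads.
BULK INSERT dbo.ImportTarget
FROM 'C:\data\import.dat'
WITH (TABLOCK);

-- Restore the original recovery model.
ALTER DATABASE ImportDb SET RECOVERY FULL;
```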
In order to execute BULK INSERT, the user must be granted ADMINISTER BULK OPERATIONS permission. This can be done either by granting the permission directly to the user or by adding the user to the bulkadmin role.
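Both routes in Transact-SQL; the login name is hypothetical:

```sql
-- Grant the server-level permission directly ...
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\ImportUser];

-- ... or add the login to the bulkadmin fixed server role.
ALTER SERVER ROLE bulkadmin ADD MEMBER [DOMAIN\ImportUser];
```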
Which is faster, SELECT INTO or INSERT...SELECT?
SELECT INTO is generally faster. Because INSERT...SELECT inserts data into an existing table, it is slower and requires more resources due to the higher number of logical reads and greater transaction log usage. However, with a query hint that locks the entire destination table, the two statements perform about the same.
Which is faster, merge or insert?
In one comparison, the source table was set up so that the INSERTs it would perform when merged with the target were interleaved with existing records for the first 500,000 rows. The results indicate that MERGE took about 28% more CPU and 29% more elapsed time than the equivalent INSERT/UPDATE.
What is OPENROWSET in SQL Server?
The OPENROWSET(BULK...) function allows you to access files in Azure Storage. The OPENROWSET function reads the content of a remote data source (for example, a file) and returns the content as a set of rows.
Which method is used for bulk import and export of data in SQL?

| Method | Description |
| --- | --- |
| bcp utility | A command-line utility (Bcp.exe) that bulk exports and bulk imports data and generates format files. |
| BULK INSERT statement | A Transact-SQL statement that imports data directly from a data file into a database table or nonpartitioned view. |
A single INSERT statement can insert up to 1,000 rows at a time; for larger loads, use multiple INSERT statements or a bulk import method.
To optimize insert speed, combine many small operations into a single large operation. Ideally, you make a single connection, send the data for many new rows at once, and delay all index updates and consistency checking until the very end.
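In Transact-SQL, the simplest form of this advice is a single INSERT that carries many rows; the table and columns here are hypothetical:

```sql
-- One statement, many rows: far fewer round trips than row-by-row inserts.
INSERT INTO dbo.Measurements (SensorId, ReadingValue)
VALUES (1, 20.5),
       (1, 21.0),
       (2, 19.8);
```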
What happens when bulk insert fails?
If an error arises while executing one of the operations (for example, if a bulk insert fails to insert 1 out of 100 rows), a single exception is thrown. It may happen that some statements in the bulk operation execute successfully while others result in an error.
Does partitioning improve insert performance?
For insert statements, the Oracle optimizer uses the values provided for the partition key to determine where to put the rows, so there will probably be no significant performance improvement.
Why is load data faster than insert?
The reason is simple: LOAD DATA INFILE bothers MySQL less than INSERT statements do. For example, the LOAD DATA INFILE statement can skip lines or columns, or load data only into specific columns, skipping all others.
Which is more efficient to insert data into a table in SQL?
In MySQL, LOAD DATA (in all forms) is more efficient than INSERT because it loads rows in bulk.
Which is faster, insert or UPDATE, in SQL Server?
INSERT is faster than UPDATE, because UPDATE does everything INSERT does, and before that it must find the record(s) and mark them for deletion.
How do you handle a large amount of data in SQL Server?
The most recommended option is to have a standby server, restore the backup of the production database on that server, and then run the DBCC command there. If the consistency checks run cleanly on the standby database, the production database should be fine, as it is the source of the standby.
How do I optimize SQL query for large data?
- Tip 1: Add missing indexes. ...
- Tip 2: Check for unused indexes. ...
- Tip 3: Avoid using multiple ORs in the filter predicate. ...
- Tip 4: Use wildcards only at the end of a phrase. ...
- Tip 5: Avoid too many JOINs.
- Step 1: Establish whether the SQL code still runs long with literal-valued parameters.
- Step 2: Review the execution plan and apply suggested indexes.
- Step 3: Isolate the bottleneck.
- Step 4: Modify the code.
A bulk insert is a process or method provided by a database management system to load multiple rows of data into a database table.
Is bulk insert a transaction?
BulkInsert doesn't create a transaction by default. If you want to save multiple lists, you will need to handle the transaction in your code.
Is bulk insert a single transaction?
The bulk insert operation is broken into batches; each batch is treated as its own transaction, so the whole operation isn't treated as a single transaction.
What is batch size in bulk insert?
Batch size is the number of records in each batch. The rows in a batch are sent to the server at the end of each batch. The BatchSize property gets or sets the number of records to use in a batch. For example, context.BulkSaveChanges(options => options.BatchSize = 1000); saves bulk data in batches of 1,000 rows.
How can I make SQL update and insert faster?
- Remove the index on the column to be updated.
- Execute the update in smaller batches (see the sketch after this list).
- Disable DELETE triggers.
- Replace the UPDATE statement with a bulk-insert operation.
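A minimal sketch of the batching tip, assuming a hypothetical table and a filter that identifies the rows to change:

```sql
-- Update at most 5,000 rows per iteration; each iteration is a small
-- transaction, which keeps the log and lock footprint down.
WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) dbo.Orders
    SET Status = 'Archived'
    WHERE Status = 'Closed';

    IF @@ROWCOUNT = 0 BREAK;  -- stop once no rows qualify
END;
```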
Inserts are slower against a heap (no clustered index). Inserts against a table with no clustered index (heap) are optimized for saving space rather than performance. This means that SQL Server will spend more time searching for available space than when a clustered index is used.
How does the BULK INSERT command handle imported identity values?
Data files that contain identity values can be bulk imported into an instance of Microsoft SQL Server. By default, the values for the identity column in the data file that is imported are ignored and SQL Server assigns unique values automatically.
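To keep the values from the file instead, BULK INSERT offers the KEEPIDENTITY option; a sketch with hypothetical names:

```sql
-- Preserve the identity values present in the data file rather than
-- letting SQL Server generate new ones.
BULK INSERT dbo.Orders
FROM 'C:\data\orders.dat'
WITH (KEEPIDENTITY);
```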
How to bulk insert CSV file in SQL Server?
Run the BULK INSERT statement from SQL Server Management Studio (SSMS), use the SSMS Import Flat File wizard, or write a program that opens the CSV file, reads its records one by one, and calls a SQL INSERT statement to insert the rows into a database table.
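A minimal sketch of the BULK INSERT route, assuming a comma-delimited file with a header row; FORMAT = 'CSV' requires SQL Server 2017 or later, and all names are hypothetical:

```sql
BULK INSERT dbo.Customers
FROM 'C:\data\customers.csv'
WITH (
    FORMAT = 'CSV',
    FIRSTROW = 2,           -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
```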
How to insert bulk data in SQL Server using stored procedure?
We have to create a variable of a user-defined table type to pass as a parameter of the procedure. The first line declares the variable of the table type, and the second line inserts some dummy records into the table-type variable. Lastly, we call EXEC to execute the procedure.
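A sketch of that pattern end to end, with hypothetical object names:

```sql
-- 1. Define a user-defined table type matching the rows to pass in.
CREATE TYPE dbo.StudentTableType AS TABLE
(
    Id   INT,
    Name NVARCHAR(100)
);
GO

-- 2. The procedure receives the rows as a READONLY table-valued parameter.
CREATE PROCEDURE dbo.InsertStudents
    @Students dbo.StudentTableType READONLY
AS
BEGIN
    INSERT INTO dbo.Student (Id, Name)
    SELECT Id, Name FROM @Students;
END;
GO

-- 3. Declare a variable of the table type, fill it, and execute.
DECLARE @Batch dbo.StudentTableType;
INSERT INTO @Batch VALUES (1, N'Ada'), (2, N'Alan');
EXEC dbo.InsertStudents @Students = @Batch;
```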
How to improve table performance in SQL Server?
- Choose an appropriate data type. ...
- Avoid nchar and nvarchar. ...
- Avoid NULL in fixed-length fields. ...
- Avoid * in SELECT statements. ...
- Use EXISTS instead of IN. ...
- Avoid the HAVING clause. ...
- Create clustered and nonclustered indexes. ...
- Keep the clustered index small.
The number of indexes on a table is the dominant factor for insert performance. The more indexes a table has, the slower the execution becomes.
Which data type is faster in SQL?
The fastest way to go is an indexed int column. int fields occupy between 2 and 8 bytes, with 4 usually being more than enough (-2147483648 to +2147483647). Character types occupy 4 bytes plus the actual strings.
How to improve performance of MERGE statement in SQL Server?
- Ask whether you absolutely need MERGE. ...
- Create indexes. ...
- Separate filtering from matching. ...
- Use query hints. ...
- Read the query plan.
A relational database management system uses SQL MERGE (also called upsert) statements to INSERT new records or UPDATE existing records depending on whether a condition matches. It was officially introduced in the SQL:2003 standard and expanded in the SQL:2008 standard.
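A classic upsert sketch in Transact-SQL; the tables and columns are hypothetical:

```sql
-- Update rows that already exist in the target; insert the rest.
MERGE dbo.Inventory AS target
USING dbo.IncomingStock AS source
    ON target.ProductId = source.ProductId
WHEN MATCHED THEN
    UPDATE SET target.Quantity = target.Quantity + source.Quantity
WHEN NOT MATCHED THEN
    INSERT (ProductId, Quantity)
    VALUES (source.ProductId, source.Quantity);
```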
What is the difference between append and MERGE?
In short, when you have one or more columns that you'd like to add to another query, you merge the queries. When you have additional rows of data that you'd like to add to an existing query, you append the query.
How to use OPENROWSET in SQL?
The OPENROWSET function can be referenced in the FROM clause of a query as if it were a table name. It can also be referenced as the target table of an INSERT, UPDATE, or DELETE statement, subject to the capabilities of the OLE DB provider.
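For example, the following sketch reads an entire file as a single value in the FROM clause; the file path is hypothetical:

```sql
-- SINGLE_CLOB returns the whole file as one varchar(max) value in a
-- one-row, one-column rowset whose column is named BulkColumn.
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\data\invoice.txt', SINGLE_CLOB) AS doc;
```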
Is OPENQUERY faster?
OPENQUERY is faster than a linked server query because, with a linked server, SQL Server breaks the query into local and remote parts: local queries are executed on the local server, and remote queries are sent to the remote server.
How to transfer bulk data from one table to another in SQL Server?
- Connect to a source database via the Choose a data source step. ...
- Connect to a destination SQL Server database in the Choose a destination step. ...
- Choose the Copy data from one or more tables or views option in the Specify table copy or query step.
How to insert bulk values into table in SQL?
We can use an INSERT-SELECT-UNION query to insert data into multiple rows of a table: the UNION of several SELECT queries builds the full row set, which the INSERT statement then loads, as sketched below.
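A sketch of the pattern with a hypothetical table; UNION ALL is used here to skip the duplicate-elimination step that plain UNION performs:

```sql
INSERT INTO dbo.Student (Id, Name)
SELECT 1, N'Ada'
UNION ALL
SELECT 2, N'Alan'
UNION ALL
SELECT 3, N'Grace';
```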
You can use the following methods to speed up inserts: If you are inserting many rows from the same client at the same time, use INSERT statements with multiple VALUES lists to insert several rows at a time. This is considerably faster (many times faster in some cases) than using separate single-row INSERT statements.
Is bulk collect faster than cursor?
Explicit BULK COLLECTs may run a little faster than cursor FOR loops optimized to return 100 rows with each fetch. BULK COLLECT improves the performance of queries that retrieve more than one row.
How do I make my SQL Server insertion faster? ›- Get a row(a) from Table A.
- Set a new row(b) based on a row(a) ex) ID NAME INFO. a 1 Test Data. b 2 Test Data.
- Insert a new data(b) into Table A.
- Loop process 1 to 3.
If a user wants to insert more than 1,000 rows, multiple INSERT statements, BULK INSERT, or a derived table must be used.
How do you automatically insert data in SQL?
Right-click the database and go to Tasks > Generate Scripts. Select the tables (or objects) that you want to generate the script against. Go to the Set scripting options tab and click the Advanced button. In the General category, go to Type of data to script.