The WbExport
command can be used to export the contents
of the database into external files. Several types of output files are supported.
The WbExport
command can either be used from within the
GUI like any other command (such as UPDATE
or INSERT
),
or it can be used as part of a SQL script that is run in
batch mode.
The WbExport
command exports either the result of the
next SQL statement (which has to produce a result set) or
the content of the table(s) specified with the -sourceTable
parameter.
The data is directly written to the output file and not loaded into memory. The export file(s)
can be compressed ("zipped") on the fly. WbImport can
import the zipped (text or XML) files directly without the need to unzip them.
If you want to save the data that is currently displayed in the result area into an external file, please use the Save Data as feature. You can also use the Database Explorer to export multiple tables.
You can also export the result of a SELECT statement by selecting the statement in the editor and then choosing the corresponding export item from the menu.
When exporting data into a text or XML file, the content of BLOB columns
is written into separate files: one file for each column of each row. Text files
that are created this way can most probably only be imported using SQL Workbench/J, as
the main file will contain the filename of the BLOB data file instead of the actual BLOB data.
The only other application I know of that can handle this type of import is Oracle's
SQL*Loader
utility. If you run the text export together with the
parameter -writeoracleloader=true
the control file will contain the
appropriate definitions to read the BLOB data from the external files.
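As an illustrative sketch (the table name person and the file path are made up; the parameters are the ones described above), a text export that writes BLOB data to external files together with a SQL*Loader control file could look like this:

```sql
-- Hypothetical example: export the table "person" (assumed to contain a
-- BLOB column) to a text file. The BLOB values are written to separate
-- files, and -writeoracleloader=true additionally creates a SQL*Loader
-- control file referencing those external files.
WbExport -type=text
         -file='c:/data/person.txt'
         -writeoracleloader=true
         -sourcetable=person;
```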
To be able to generate the binary Microsoft Excel format, you need to download
the POI library from the Apache project's website: http://poi.apache.org/.
Any version greater than 2.5 is fine. The distribution contains a jar file named
e.g. poi-3.0.1-FINAL-20070705.jar
(exact name depends on the version).
This jar file needs to be copied into the same directory as sqlworkbench.jar
under
the name poi.jar
.
If an export to an Excel spreadsheet is needed, it is recommended to use the Excel XML format, as that doesn't need an external library (just like exporting to the OpenDocument format).
The command supports the following parameters:
| Parameter | Description |
| --- | --- |
| -type | Defines the type of the output file (e.g. text, xml, sqlinsert). When generating UPDATE statements, the table needs to have key columns defined, or you have to define the key columns manually using the -keyColumns parameter. |
| -createDir | If this parameter is set to true, SQL Workbench/J will create any needed directories when creating the output file. |
| -sourcetable | Defines a list of tables to be exported. If you want to export tables from a different user or schema, you can use a schema name combined with a wildcard (e.g. -sourcetable=some_schema.*). |
| -outputDir | When using the -sourcetable switch with multiple tables, this parameter is mandatory and defines the directory where the generated files should be stored. |
| -continueOnError | When exporting more than one table, this parameter controls whether the whole export will be terminated if an error occurs during the export of one of the tables. |
| -encoding | Defines the encoding in which the file should be written. Common encodings are ISO-8859-1, ISO-8859-15 and UTF-8 (or UTF8). To get a list of available encodings, execute WbExport with the parameter -showEncodings. |
| -showEncodings | Displays the encodings supported by your Java version and operating system. If this parameter is present, all other parameters are ignored. |
| -lineEnding | Defines the line ending to be used for XML or text files. The default line ending depends on the platform where SQL Workbench/J is running. |
| -header | Possible values: true, false. If this parameter is set to true, the header (i.e. the column names) is placed into the first line of the output file. The default is to not create a header line. You can define the default value for this parameter in the file workbench.settings. This parameter is valid for text and spreadsheet (OpenDocument, Excel) exports. |
| -compress | Selects whether the output file should be compressed and put into a ZIP archive. The archive will be created with the name of the specified output file but with the extension zip. When exporting multiple tables using the -sourcetable switch, one archive is created for each table. |
| -clobAsFile | Possible values: true, false. For SQL, XML and text exports this controls how the contents of CLOB columns are exported. Usually the CLOB content is put directly into the output file. When generating SQL scripts with WbExport this can be a problem, as not all DBMS can cope with long character literals (e.g. Oracle has a limit of 4000 bytes). When this parameter is set to true, SQL Workbench/J will create one file for each CLOB column value. This is the same behaviour as with BLOB columns. Text files that are created with this parameter set to true will contain the filename of the generated output file instead of the actual column value. All CLOB files are written using the encoding specified with the -encoding parameter. |
| -lobIdCols | When exporting CLOB or BLOB columns as external files, the filename for the LOB content is generated using the row and column number of the currently exported LOB column (e.g. data_r15_c4.data). If you prefer to have the value of a unique column combination as part of the file name, you can specify those columns using the -lobIdCols parameter. |
| -extensionColumn | When exporting CLOB or BLOB columns as external files, the extension of the generated filenames can be taken from a column of the result set. If the exported table contains more than one type of BLOB (e.g. JPEG, GIF, PDF) and your table stores the information needed to derive the extension, this can be used to re-generate proper filenames. This parameter only makes sense if exactly one BLOB column is exported. |
| -filenameColumn | When exporting CLOB or BLOB columns as external files, the complete filename can be taken from a column of the result set (instead of dynamically creating a new name based on the row and column numbers). This parameter only makes sense if exactly one BLOB column is exported. |
| -showProgress | Valid values: true, false, <numeric value>. Controls the update frequency in the statusbar (when running in GUI mode). By default every 10th row is reported. To disable the progress display, specify a value of 0 (zero) or the value false. |
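To illustrate how several of the general parameters combine, here is a hedged sketch (the schema name sales and the output directory are made up for illustration):

```sql
-- Hypothetical example: export every table of the schema "sales" into
-- one UTF-8 encoded text file per table; create the output directory if
-- needed and keep going if the export of a single table fails.
WbExport -type=text
         -outputDir='c:/data/export'
         -createDir=true
         -encoding=UTF-8
         -continueOnError=true
         -sourcetable=sales.*;
```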
| Parameter | Description |
| --- | --- |
| -delimiter | The given string sequence will be placed between two columns. The default is a tab character (-delimiter=\t). |
| -dateFormat | The date format to be used when writing date columns into the output file. |
| -timestampFormat | The format to be used when writing datetime (or timestamp) columns into the output file. |
| -quoteChar | The character (or sequence of characters) used to enclose text (character) data if the delimiter is contained in the data. By default quoting is disabled until a quote character is defined. To use the double quote as the quote character you have to enclose it in single quotes: -quoteChar='"'. |
| -quoteCharEscaping | Defines how quote characters that appear in the actual data are written to the output file. If no quote character has been defined using the -quoteChar switch, this option is ignored. |
| -quoteAlways | Possible values: true, false. If quoting is enabled (via -quoteChar), values are by default only quoted when necessary. With -quoteAlways=true all character data is enclosed in quotes. |
| -decimal | The decimal symbol to be used for numbers. The default is a dot (e.g. 3.14152). |
| -escapeText | This parameter controls the escaping of non-printable or non-ASCII characters. When enabled, a Unicode representation of the character is written into the text file, e.g. \n for a newline or \u00F6 for ö. Such a file can only be imported using SQL Workbench/J (at least I don't know of any DBMS specific loader that will decode this properly). If character escaping is enabled, the quote character will be escaped inside quoted values and the delimiter will be escaped inside non-quoted values. The delimiter could also be escaped inside a quoted value if the delimiter falls into the selected escape range (e.g. a tab character). |
| -formatFile | This parameter controls the creation of a control file for the bulk load utilities of Oracle and Microsoft SQL Server. You can specify both formats; in that case two control files will be created. Note that the generated control file will most probably need some adjustments before you can actually use it. |
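As a sketch of the text-specific parameters (the table name orders is made up; the date pattern assumes Java-style date format syntax):

```sql
-- Hypothetical example: semicolon-delimited file with a header line,
-- double quotes around all character data and ISO-style dates.
WbExport -type=text
         -file='c:/data/orders.txt'
         -header=true
         -delimiter=';'
         -quoteChar='"'
         -quoteAlways=true
         -dateFormat='yyyy-MM-dd'
         -sourcetable=orders;
```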
| Parameter | Description |
| --- | --- |
| -table | The given table name will be put into the <table> tag as an attribute. |
| -dateFormat | The date format to be used when writing date columns into the output file. |
| -timestampFormat | The format to be used when writing datetime (or timestamp) columns into the output file. |
| -decimal | The decimal symbol to be used for numbers. The default is a dot (e.g. 3.14152). |
| -useCDATA | Possible values: true, false. Normally any character data written into the XML file is processed to escape XML special characters (e.g. < will be written as &lt;). If you don't want that escaping, set -useCDATA=true and the character data will be enclosed in CDATA sections instead. With -useCDATA=false (the default) the data is escaped as described above. |
| -stylesheet | The name of the XSLT stylesheet that should be used to transform the SQL Workbench/J specific XML file into a different format. If -stylesheet is specified, -xsltOutput has to be specified as well. |
| -xsltOutput | The resulting output file (specified with the -file parameter) can be transformed using XSLT after the export has finished. This parameter defines the name of the output file of the transformation. |
| -verboseXML | Possible values: true, false. This parameter controls the tags that are used in the XML file and minor formatting features. The default is -verboseXML=true, which generates more readable tags and formatting. However, the overhead imposed by this is quite high. Using -verboseXML=false uses shorter tag names (no longer than two characters) and puts more information on one line. This output is harder to read for a human but is smaller in size, which could be important for exports with large result sets. |
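A sketch combining the XML-specific parameters (file and stylesheet names are made up):

```sql
-- Hypothetical example: export to compact XML and transform the result
-- into HTML using an XSLT stylesheet after the export has finished.
WbExport -type=xml
         -file='c:/data/person.xml'
         -verboseXML=false
         -stylesheet='person_to_html.xslt'
         -xsltOutput='c:/data/person.html'
         -sourcetable=person;
```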
| Parameter | Description |
| --- | --- |
| -table | Defines the table name to be used for the UPDATE or INSERT statements. This parameter is required if the SELECT statement has multiple tables in the FROM list. |
| -charfunc | If this parameter is given, any non-printable character in a text/character column will be replaced with a call to the given function with the ASCII value as the parameter. If -charfunc=chr is given (e.g. for Oracle syntax), a CR (=13) inside a character column will be replaced with a call to chr(13) concatenated with the surrounding text. This setting affects ASCII values from 0 to 31. |
| -concat | If the parameter -charfunc is used, SQL Workbench/J will concatenate the individual pieces using the ANSI SQL operator for string concatenation. In case your DBMS does not support the ANSI standard (e.g. MS ACCESS) you can specify the operator to be used: -concat=+ defines the plus sign as the concatenation operator. |
| -blobType | This parameter controls how BLOB data will be put into the generated SQL statements. By default no conversion will be done, so the actual value that is written to the output file depends on the JDBC driver's implementation of the Blob interface. |
| -sqlDateLiterals | This parameter controls the generation of date or timestamp literals. By default, literals that are specific to the current DBMS are created. You can also choose to create literals that comply with the JDBC specification, or ANSI SQL literals for dates and timestamps. You can define the default literal format to be used for the WbExport command in the options dialog. |
| -commitEvery | A numeric value which defines the number of statements after which a COMMIT is written into the generated SQL script. -commitevery=100 will create a COMMIT after every 100th statement. If this is not specified, a single COMMIT is written at the end of the script. |
| -createTable | Possible values: true, false. If this parameter is set to true, the necessary CREATE TABLE statement is written into the output file before the generated DML statements. |
| -keyColumns | A comma separated list of column names that occur in the table or result set and should be used as the key columns for the UPDATE statements. If the table does not have key columns, or the source SELECT statement uses a join over several tables, or you do not want to use the key columns defined in the database, this parameter can be used to define the key columns for the UPDATE statements. This key overrides any key columns defined on the base table of the SELECT statement. |
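A sketch of a SQL export using some of these parameters (the table names person and person_copy are made up):

```sql
-- Hypothetical example: generate INSERT statements for the table
-- "person", preceded by a CREATE TABLE statement for the target table
-- and with a COMMIT after every 100 rows.
WbExport -type=sqlinsert
         -file='c:/data/person.sql'
         -table=person_copy
         -createTable=true
         -commitEvery=100
         -sourcetable=person;
```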
| Parameter | Description |
| --- | --- |
| -pageTitle | The name to be used for the worksheet. |
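A sketch of a spreadsheet export (the -type keyword ods for the OpenDocument format is an assumption; check the list of supported -type values for your version):

```sql
-- Hypothetical example: export to an OpenDocument spreadsheet and set
-- the worksheet name via -pageTitle.
WbExport -type=ods
         -file='c:/data/person.ods'
         -pageTitle='Person Data'
         -sourcetable=person;
```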
The WbExport
command supports compressing
of the generated output files. This includes the "main" export file and any associated LOB
files.
When using WbImport
you can import
the data stored in the archives without unpacking them. Simply specify the archive name
with the -file
parameter. SQL Workbench/J will detect that the input file
is an archive and will extract the information "on the fly". Assume the following export
command:
WbExport -type=text -file=/home/data/person.txt -compress=true -sourcetable=person;
This command will create the file /home/data/person.zip
that will contain the
specified person.txt
. To import this export into the table employee, you can
use the following command:
WbImport -type=text -file=/home/data/person.zip -table=employee;
Assuming the PERSON
table had a BLOB column (e.g. a picture of the person),
the WbExport
command would have created an additional file called
person_blobs.zip
that would contain all BLOB data. The WbImport
command will automatically read the BLOB data from that archive.
WbExport -type=text -file='c:/data/data.txt' -delimiter='|' -decimal=','; SELECT * FROM data_table;
Will create a text file with the data from data_table
.
Each column will be separated with the character |. Each fractional number
will be written with a comma as the decimal separator. As the
SELECT statement retrieves all rows and columns from the table, this
could also be written as:
WbExport -type=text -file='c:/data/data.txt' -delimiter='|' -decimal=',' -sourcetable=data_table;
To export all tables from the current connection into tab-separated files and compress the files, you can use the following statement:
WbExport -type=text -outputDir=c:/data/export -compress=true -sourcetable=*;
This will create one zip file for each table containing the exported data as a text file. If a table contains BLOB columns, the blob data will be written into a separate zip file.
The files created by the above statement can be imported into another database using the following command:
WbImport -type=text -sourceDir=c:/data/export -checkDependencies=true;
To generate a file that contains INSERT
statements that can be
executed on the target system, the following command can be used:
WbExport -type=sqlinsert -file='c:/data/newtable.sql' -table=newtable; SELECT * FROM table1, table2 WHERE table1.column1 = table2.column1;
will create a SQL script which inserts the data from table1
and table2 into a table called newtable. If the parameter -table is
omitted, the creation of SQL INSERT
statements is only possible
if the SELECT is based on a single table (or view).
There is a difference in the behaviour of the command regarding the "Max. Rows" setting
in the GUI. When you use the WbExport
command together with
a SELECT
query, the Max. Rows setting will be respected
by the SELECT
statement (and you will see a warning
that the result set was limited). When you use the WbExport
with the -sourcetable
switch, the "Max. Rows" setting
will not be respected, and all rows from the table will be written into
the specified file.