SQL Workbench/J implements a set of SQL commands. These commands
can be used like any standard SQL command (such as UPDATE)
inside SQL Workbench/J, i.e. inside the editor or as part of a SQL script
that is run through SQL Workbench/J in batch mode.
As these commands are implemented by SQL Workbench/J, you will not be able to
use them when running your SQL scripts with a different client program
(e.g. psql, SQL*Plus or phpMyAdmin).
Creates an XML report of selected tables. This report can be used to generate HTML documentation of the database (e.g. using the XSLT command). The report can also be generated from within the Database Object Explorer.
The resulting XML file can be transformed into HTML documentation of your database schema.
Sample stylesheets can be downloaded from http://www.sql-workbench.net/xstl.html.
If you have XSLT stylesheets that you would like to share, please send them to
<support@sql-workbench.net>.
Using this command you can reverse engineer an existing database. The XML file can then be used to generate HTML documentation of the database, or be transformed into a format that is supported by your design tool.
To see table and column comments with an Oracle database, you need to enable remarks reporting for the JDBC driver.
The command supports the following parameters:
Parameter | Description |
---|---|
-file | The filename of the output file. |
-tables | A (comma separated) list of tables to report. The default is all tables. If this parameter is specified, -schemas is ignored. To generate the report for tables from different users/schemas, use fully qualified names in the list (e.g. -tables=MY_USER.TABLE1,OTHER_USER.TABLE2). You can also use wildcards in the table name: -tables=CONTRACT_% creates an XML report for all tables whose names start with CONTRACT_. |
-schemas | A (comma separated) list of schemas to generate the report from. For each user/schema all tables are included in the report, e.g. -schemas=MY_USER,OTHER_USER generates a report for all tables in the schemas MY_USER and OTHER_USER. |
-namespace | The namespace to be used for the XML tags. By default no namespace is used. If you supply a value, e.g. wb, the tag <schema-report> is written as <wb:schema-report>. |
-includeTables | Controls the output of table information for the report. Valid values are true, false. The default is true. |
-includeTableGrants | If tables are included in the output, the grants for each table can also be included with this parameter. The default value is false. |
-includeProcedures | Controls the output of stored procedure information for the report. Valid values are true, false. The default is false. |
-includeSequences | Controls the output of sequence information for the report. Valid values are true, false. The default is false. |
-reportTitle | Defines the title for the generated XML file. The specified title is written into the tag <report-title> and can be used when transforming the XML, e.g. into an HTML file. |
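For example, the following call would create a report of all tables in two schemas, including sequences (assuming the command name WbSchemaReport, which the text above does not spell out; the file and schema names are illustrative):
WbSchemaReport -file=db_report.xml -schemas=MY_USER,OTHER_USER -includeSequences=true -reportTitle="My Database";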
The WbSchemaDiff command analyzes two schemas (or a list of tables)
and outputs the differences between those schemas as an XML file. The XML file
describes the changes that need to be applied to the target schema to give it
the same structure as the reference schema, e.g. modify column definitions,
remove or add tables, remove or add indexes.
The output is intended to be transformed using XSLT (e.g. with the XSLT command). Sample XSLT transformations can be found on the SQL Workbench/J homepage.
The command supports the following parameters:
Parameter | Description |
---|---|
-referenceProfile | The name of the connection profile for the reference connection. If this is not specified, the current connection is used. |
-referenceGroup | If the name of your reference profile is not unique across all profiles, you need to specify the group in which the profile is located with this parameter. |
-targetProfile | The name of the connection profile for the target connection (the one that needs to be migrated). If this is not specified, the current connection is used. If you use the current connection for reference and target, you should prefix the table names with the schema/user or use the -referenceSchema and -targetSchema parameters. |
-targetGroup | If the name of your target profile is not unique across all profiles, you need to specify the group in which the profile is located with this parameter. |
-file | The filename of the output file. If this is not supplied, the output is written to the message area. |
-referenceTables | A (comma separated) list of tables that are the reference tables to be checked. |
-targetTables | A (comma separated) list of tables in the target connection to be compared to the source tables. The tables are "matched" by their position in the list: the first table in -referenceTables is compared to the first table in -targetTables, and so on. If you omit this parameter, all tables from the target connection with the same names as those listed in -referenceTables are used. If you omit both parameters, all tables that the user can access are retrieved from the source connection and compared to the tables with the same name in the target connection. |
-referenceSchema | Compare all tables from the specified schema (user). |
-targetSchema | A schema in the target connection to be compared to the tables from the reference schema. |
-namespace | The namespace to be used for the XML tags. By default no namespace is used. If you supply a value, e.g. wb, the tag <modify-table> is written as <wb:modify-table>. |
-encoding | The encoding to be used for the XML file. The default is UTF-8. |
-includePrimaryKeys | Select whether primary key constraint definitions should be compared as well. Valid values are true or false. The default is true. |
-includeForeignKeys | Select whether foreign key constraint definitions should be compared as well. Valid values are true or false. The default is true. |
-includeTableGrants | Select whether table grants should be compared as well. The default is false. |
-includeConstraints | Select whether table and column (check) constraints should be compared as well. SQL Workbench/J compares the constraint definition (SQL) as stored in the database, so comparing schemas from different DBMS systems will not return the desired results. The default is to not compare table constraints (false). |
-includeViews | Select whether views should also be compared. When comparing views, the source as it is stored in the DBMS is compared. This comparison is case-sensitive, which means that any difference in upper/lowercase letters is reported as a difference. |
-includeProcedures | Select whether stored procedures should also be compared. When comparing procedures, the source as it is stored in the DBMS is compared. This comparison is case-sensitive. A comparison across different DBMS will not work either. |
-includeIndex | Select whether indexes should be compared as well. Valid values are true or false. The default is to not compare index definitions. |
-includeSequences | Select whether sequences should be compared as well. Valid values are true, false. The default is to not compare sequences. |
-useJdbcTypes | Define whether to compare the DBMS specific data types or the JDBC data types returned by the driver. When comparing tables from two different DBMS it is recommended to use -useJdbcTypes=true. Valid values are true, false. |
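A typical call compares two profiles and writes the XML output to a file (the profile and file names are illustrative):
WbSchemaDiff -referenceProfile="Production" -targetProfile="Staging" -file=migrate_staging.xml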
The WbDataDiff command can be used to generate SQL scripts
that update a target database such that the data is identical to a reference
database. This is similar to WbSchemaDiff, but compares
the actual data in the tables rather than the table structure.
For each table the command will create up to three script files, depending on
the statements needed to migrate the data: one file for UPDATE statements,
one file for INSERT statements and one file for DELETE
statements (if -includeDelete=true is specified).
Note: as this command needs to read every row from the reference and the target table, processing large tables can take quite some time.
WbDataDiff requires that all involved tables have a primary key
defined. If a table does not have a primary key, WbDataDiff will
stop the processing.
To improve performance (a bit), the rows are retrieved in chunks from the
target table by dynamically constructing a WHERE clause for the rows
that were retrieved from the reference table. The chunk size
can be controlled using the property workbench.sql.sync.chunksize.
The chunk size defaults to 25. This is a conservative setting to avoid
problems with long SQL statements when processing tables that have
a PK with multiple columns. If you know that your primary keys
consist only of a single column and the values won't be too long, you
can increase the chunk size, possibly improving performance when
generating the SQL statements. As most DBMS have a limit on the length
of a single SQL statement, be careful when setting the chunk size too high.
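For example, to raise the chunk size to 100 you could add the following line to your workbench.settings file (the value is illustrative):
workbench.sql.sync.chunksize=100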
The same chunk size is applied when generating DELETE
statements by the WbCopy
command,
when syncDelete mode is enabled.
The command supports the following parameters:
Parameter | Description |
---|---|
-referenceProfile | The name of the connection profile for the reference connection. If this is not specified, the current connection is used. |
-referenceGroup | If the name of your reference profile is not unique across all profiles, you need to specify the group in which the profile is located with this parameter. If the profile's name is unique you can omit this parameter. |
-targetProfile | The name of the connection profile for the target connection (the one that needs to be migrated). If this is not specified, the current connection is used. If you use the current connection for reference and target, you should prefix the table names with the schema/user or use the -referenceSchema and -targetSchema parameters. |
-targetGroup | If the name of your target profile is not unique across all profiles, you need to specify the group in which the profile is located with this parameter. |
-file | The filename of the main script file. The command creates two scripts per table: one script named update_<tablename>.sql that contains all needed UPDATE or INSERT statements, and a second script named delete_<tablename>.sql that contains all DELETE statements for the target table. The main script merely calls (using WbInclude) the generated scripts for each table. |
-referenceTables | A (comma separated) list of tables that are the reference tables to be checked. You can specify the tables with wildcards, e.g. -referenceTables=P% to compare all tables that start with the letter P. |
-targetTables | A (comma separated) list of tables in the target connection to be compared to the source tables. The tables are "matched" by their position in the list: the first table in -referenceTables is compared to the first table in -targetTables, and so on. If you omit this parameter, all tables from the target connection with the same names as those listed in -referenceTables are used. If you omit both parameters, all tables that the user can access are retrieved from the source connection and compared to the tables with the same name in the target connection. |
-referenceSchema | Compare all tables from the specified schema (user). |
-targetSchema | A schema in the target connection to be compared to the tables from the reference schema. |
-checkDependencies | Valid values are true, false. Sorts the generated scripts in order to respect foreign key dependencies for deleting and inserting rows. The default is true. |
-includeDelete | Valid values are true, false. Generates DELETE statements for rows that are present in the target table but not in the reference table. The default is false. |
-encoding | The encoding to be used for the SQL scripts. The default depends on your operating system and is displayed when you run WbDataDiff without any parameters. You can overwrite the platform default with the property workbench.encoding in the file workbench.settings. |
-sqlDateLiterals | Controls the format in which the values of DATE, TIME and TIMESTAMP columns are written into the generated SQL statements. For a detailed description of the possible values, please refer to the WbExport command. |
-ignoreColumns | With this parameter you can define a list of column names that should not be considered when comparing data. You can e.g. exclude columns that store the last access time of a row, or the last update time, if they should not be taken into account when checking for changes. |
-showProgress | Valid values: true, false, <numeric value>. Controls the update frequency in the status bar (when running in GUI mode). By default every 10th row is reported. To disable the progress display, specify a value of 0 (zero) or the value false. |
Compare all tables between two connections and write the output to the
file migrate_staging.sql, but do not generate DELETE statements.
WbDataDiff -referenceProfile="Production" -targetProfile="Staging" -file=migrate_staging.sql -includeDelete=false
Compare a list of matching tables between two databases and write the output to the
file migrate_staging.sql, including DELETE statements.
WbDataDiff -referenceProfile="Production" -targetProfile="Staging" -referenceTables=person,address,person_address -file=migrate_staging.sql -includeDelete=true
Compare three tables that are named differently in the target database and
ignore all columns (regardless of which table they appear in) that are named
LAST_ACCESS or LAST_UPDATE.
WbDataDiff -referenceProfile="Production" -targetProfile="Staging" -referenceTables=person,address,person_address -targetTables=t_person,t_address,t_person_address -ignoreColumns=last_access,last_update -file=migrate_staging.sql -includeDelete=true
Transforms an XML file via an XSLT stylesheet. This can be used to format XML input files into the correct format for SQL Workbench/J, or to transform the output files that are generated by the various SQL Workbench/J commands.
Parameters for the XSLT command:
Parameter | Description |
---|---|
-inputfile | The name of the XML source file. |
-xsltoutput | The name of the generated output file. |
-stylesheet | The name of the XSLT stylesheet to be used. |
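For example (assuming the command name WbXslt, which the text above does not spell out; the file names are illustrative):
WbXslt -inputfile=db_report.xml -stylesheet=report2html.xslt -xsltoutput=db_report.html;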
This defines an internal variable which is used for variable substitution during SQL execution. Details can be found in the chapter Variable substitution.
The syntax for defining a variable is: WbVarDef variable=value
The variable definition can also be read from a file. The file should list
each variable definition on a separate line (this is the format of a normal Java properties
file). Lines beginning with a # sign are ignored.
The syntax is WbVarDef -file=<filename>
You can also specify a file when starting SQL Workbench/J with the
parameter -vardef=filename.ext. When specifying a filename
you can also define an encoding for the file using the -encoding
switch. The specified file has to be a regular Java properties file.
For details see Reading variables from a file.
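For example, a variable can be defined and then referenced through the $[...] placeholder syntax (a sketch; the placeholder format is described in the chapter Variable substitution):
WbVarDef current_id=42;
SELECT * FROM person WHERE id = $[current_id];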
This removes an internal variable from the variable list. Details can be found in the chapter Variable substitution.
This lists all defined variables from the variable list. Details can be found in the chapter Variable substitution.
The WbConfirm
command pauses the execution of the
current script and displays a message. You can then choose to stop
the script or continue. The message can be supplied as a parameter of
the command. If no message is supplied, a default message is
displayed.
This command can be used to prevent accidental execution of a script even if confirm updates is not enabled.
This command has no effect in batch mode.
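For example, placing this at the top of a destructive script pauses execution and asks for confirmation (the message text is illustrative):
WbConfirm Going to delete obsolete rows. Continue?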
With the WbInclude
command you run SQL scripts without
actually loading them into the editor, or call other scripts from within
a script. The format of the command is WbInclude -file=filename;
.
For DBMS other than MS SQL, the command can be abbreviated using the @ sign: @filename;
is equivalent to WbInclude -file=filename;.
The called script may itself include other scripts. Relative filenames (e.g. as parameters
for SQL Workbench/J commands) in the script are always resolved relative to the directory
where the script is located, not the current directory of the application.
The reason for excluding MS SQL is that when creating stored procedures in MS SQL, the procedure
parameters are identified using the @ sign, so SQL Workbench/J would interpret such lines
as the WbInclude command. If you want to use the @ abbreviation
with MS SQL, you can configure this in your
workbench.settings configuration file.
The long version of the command accepts additional parameters. When using the long version, the filename needs to be passed as a parameter as well.
Only files up to a certain size will be read into memory. Files exceeding this size will be processed statement by statement. In this case the automatic detection of the alternate delimiter will not work. If your scripts exceed the maximum size and do use the alternate delimiter, you will have to use the "long" version so that you can specify the actual delimiter used in your script.
The command supports the following parameters:
Parameter | Description |
---|---|
-file | The filename of the file to be included. |
-continueOnError | Defines the behaviour if an error occurs in one of the statements. If this is set to true, script execution continues even if one statement fails. If set to false, script execution is halted on the first error. The default value is false. |
-delimiter | Specifies the delimiter that is used in the script. This defaults to ;. If you want to define a delimiter that is only recognized when being on a single line, append ;nl to the value, e.g.: -delimiter=/;nl |
-encoding | Specifies the encoding of the input file. If no encoding is specified, the default encoding for the current platform (operating system) is used. |
-verbose | Controls the logging level of the executed commands. -verbose=true has the same effect as adding a WbFeedback on inside the called script. -verbose=false has the same effect as adding a WbFeedback off inside the called script. |
Execute my_script.sql
@my_script.sql;
Execute my_script.sql, but abort on the first error
WbInclude -file="my_script.sql" -continueOnError=false;
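A call using the long form with an alternate delimiter that is only recognized on a single line (the filename is illustrative):
WbInclude -file=create_procedures.sql -delimiter=/;nl -continueOnError=true;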
To be able to directly edit data in the result set (grid) SQL Workbench/J needs
a primary key on the underlying table. In some cases these primary keys are not present or
cannot be retrieved from the database (e.g. when using updateable views).
To still be able to automatically update a result based on those tables (without
being prompted for the key columns each time), you can manually define a primary
key using the WbDefinePk command.
Assume you have an updateable view called v_person where
the primary key is the column person_id. When you simply do a
SELECT * FROM v_person, SQL Workbench/J will prompt you for the
primary key when you try to save changes to the data. If you run
WbDefinePk v_person=person_id
before retrieving the result, SQL Workbench/J will automatically
use person_id as the primary key (just as if this
information had been retrieved from the database).
To delete a definition simply call the command with an empty column list:
WbDefinePk v_person=
If you want to define certain mappings permanently, this can be done using a mapping file that is specified in the configuration file. The specified file has to be a text file with each line containing one primary key definition in the same format as passed to this command. The global mapping will automatically be saved when you exit the application if a filename has been defined. If no file is defined, then all PK mappings that you define are lost when exiting the application (unless you explicitly save them using WbSavePkMap).
The definitions
v_person=person_id
v_data=id1,id2
will define a primary key for the view v_person and one for
the view v_data. The definitions stored in that file can
be overwritten using the WbDefinePk command, but those changes
won't be saved to the file. This file will be read for all database connections and
is not profile specific. If you have conflicting primary key definitions for
different databases, you'll need to execute the WbDefinePk command
each time, rather than specifying the keys in the mapping file.
When you define the key columns for a table through the GUI, you have the option
to remember the defined mapping. If this option is checked, that mapping
will be added to the global map (just as if you had executed WbDefinePk manually).
Note: the mappings will be stored internally with lowercase table names, regardless of how you specify them.
To view the currently defined primary keys, execute the command
WbListPkDef
.
To load additional primary key definitions from a file, you can
use the WbLoadPkMap command. If a filename is defined
in the configuration file, then that
file is loaded. Alternatively, if no file is configured or if you want to
load a different file, you can specify the filename using the -file
parameter.
To save the current primary key definitions to a file, you can
use the WbSavePkMap command. If a filename is defined
in the configuration file, the
definition is stored in that file. Alternatively, if no file is configured or if you want to
store the current mapping in a different file, you can specify the filename
using the -file parameter.
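For example, the current definitions can be saved and later restored explicitly (the filename is illustrative):
WbSavePkMap -file=pkmapping.properties;
WbLoadPkMap -file=pkmapping.properties;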
To save the contents of a BLOB
or CLOB
column
into an external file the WbSelectBlob
command can be used. Most DBMS
support reading of CLOB
(character data) columns directly, so depending
on your DBMS (and JDBC driver) this command might only be needed for binary data.
The syntax is very similar to the regular SELECT
statement, an additional
INTO
keyword specifies the name of the external file into which the
data should be written:
WbSelectBlob blob_column INTO c:/temp/image.bmp FROM theTable WHERE id=42;
Even if you specify more than one column in the column list, SQL Workbench/J will only use the first column. If the SELECT returns more than one row, one output file will be created for each row; the additional files are created with a counter indicating the row number from the result. In the above example image.bmp, image_1.bmp, image_2.bmp and so on would be created. If you want to export additional columns together with the BLOB contents, please use the WbExport command.
Note: you can fully manipulate (save, view, upload) the contents of BLOB columns in a result set. Please refer to BLOB support for details.
This command enables the DBMS_OUTPUT package when connected to
an Oracle database. On other systems this command does nothing. After
the DBMS_OUTPUT package is enabled, any message written with
dbms_output.put_line() is displayed in the message pane after
executing a SQL statement. It is equivalent to calling the dbms_output.enable() procedure.
You can control the buffer size of the DBMS_OUTPUT
package by passing the
desired buffer size as a parameter to the ENABLEOUT
command:
ENABLEOUT 32000;
Note: due to a bug in Oracle's JDBC driver, you cannot retrieve columns with the LONG or LONG RAW data type if the DBMS_OUTPUT package is enabled.
This disables the DBMS_OUTPUT package for an Oracle database.
It is equivalent to calling the dbms_output.disable() procedure.
Normally SQL Workbench/J prints the results for each statement
into the message panel. As this feedback can slow down the execution
of large scripts, you can disable the feedback using the WbFeedback
command. When WbFeedback OFF is executed, only a summary of the
number of executed statements will be displayed once the script execution has
finished. This is the same behaviour as selecting "Consolidate script log" in the
options window. The only difference is that the setting through WbFeedback
is temporary and does not affect the global setting.
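For example, to run a large script with consolidated feedback and then restore the default (a sketch using WbInclude, described earlier in this chapter; the filename is illustrative):
WbFeedback off;
WbInclude -file=big_script.sql;
WbFeedback on;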
The SET command is available to enable you to run SQL scripts that are
designed for Oracle's SQL*Plus utility inside SQL Workbench/J as well.
Most of the parameters of the SET command are only valid inside
SQL*Plus, and thus for Oracle any error message resulting from executing a SET
command will only be logged as a warning. For all other DBMS the command is passed
directly to the server, except for the parameters described in this chapter (because
they have an equivalent JDBC call that is executed instead).
SET feedback ON/OFF
is equivalent to the WbFeedback
command, but mimics the syntax of Oracle's SQL*Plus utility.
SET serveroutput on is equivalent to the ENABLEOUT
command and SET serveroutput off is equivalent to the DISABLEOUT command.
With the command SET autocommit ON/OFF, autocommit can be turned on or
off for the current connection. This is equivalent to setting the autocommit property
in the connection profile or toggling the state of the corresponding menu item.
Describe shows the definition of the given table. It can be abbreviated with DESC. The command expects the table name as a parameter.
DESC person;
If you want to show the structure of a table from a different user, you need to prefix the table name with the desired user:
DESCRIBE otheruser.person;
This command lists all available tables (including views and synonyms). This output is equivalent to the left part of the Database Object Explorer's Table tab.
This command will list all stored procedures available to the current user. The output of this command is equivalent to the Database Explorer's Procedure tab.
Lists the available catalogs or databases. The output of this command depends on the underlying JDBC driver and DBMS. For MS SQL Server this lists the available databases (which could then be changed with USE <dbname>).
For Oracle this command returns nothing, as Oracle does not implement the concept of catalogs.
This command calls the JDBC driver's getCatalogs()
method and will
return its result. If on your database system this command does not display
a list, it is most likely that your DBMS does not support catalogs (e.g. Oracle)
or the driver does not implement this feature.