When executing a statement, the contents of the editor are put into an internal buffer together with the information about the text selection and the cursor position. Even when you select a part of the current text and execute that statement, the whole text is stored in the history buffer together with the selection information. When you select and execute different parts of the text and then move through the history, you will see the selection change for each history entry.
The previous statement can be recalled by pressing Alt-Left or choosing the corresponding item from the menu. Once the previous statement(s) have been recalled, the next statement can be shown using Alt-Right or the corresponding menu item. This is similar to browsing through the history of a web browser.
You can clear the statement history for the current tab by selecting the corresponding menu item.

When you clear the content of the editor (e.g. by selecting the whole text and then pressing the Del key) this will not clear the statement history. When you load the associated workspace the next time, the editor will automatically display the last statement from the history. You need to manually clear the statement history if you want an empty editor the next time you load the workspace.
There are three different ways to execute a SQL command
Execute the selected text
When you press Ctrl-E or select the corresponding menu item, the currently selected text will be sent to the DBMS for execution. If no text is selected, the complete contents of the editor will be sent to the database.
When you press Ctrl-Enter or select the corresponding menu item, the current statement will be executed. The "current" statement is the text between the delimiter before the current cursor position and the next delimiter after the cursor position.
Example (| indicating the cursor position)
SELECT firstname, lastname FROM person;
DELETE FROM person| WHERE lastname = 'Dent';
COMMIT;
When pressing Ctrl-Enter, the DELETE statement will be executed.
You can configure SQL Workbench/J to automatically jump to the next statement after executing the current statement; simply select the corresponding menu item. The check mark next to the menu item indicates if this option is enabled. This option can also be changed through the Options dialog.

Execute All

If you want to execute the complete text in the editor regardless of the current selection, use the Execute All command, either by pressing Ctrl-Shift-E or selecting the corresponding menu item.

As long as at least one statement is running, the title of the main window will be prefixed with the » sign. Even if the main window is minimized you can still see if a statement is running by looking at the window title.
You can use variables in your SQL statements that are replaced when the statement is executed. Details on how to use variables can be found in the chapter Variable substitution.
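As a brief, hedged illustration (assuming the default variable syntax described in that chapter, and a purely hypothetical person table):

WbVarDef search_name='Dent';
SELECT * FROM person WHERE lastname = '$[search_name]';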
When you run SQL statements that produce a result (such as a SELECT
statement) these results will be displayed in the lower pane of the window, next to
the message panel. For each result that is returned from the server, one tab
(labelled "Result") will be created. If you select and execute three SELECT
statements, the lower pane will show three result tabs and the message tab. If your
statement(s) did not produce any result, only the messages tab will be displayed.
When you run a SQL statement, the current results will be cleared and replaced by the new results. You can turn this off by selecting the corresponding menu item. Every result that is retrieved while this option is turned on will be added to the set of result tabs, until you de-select this option. This can also be toggled using the button on the toolbar. Additional result tabs can be closed using the corresponding menu item.

You can also run stored procedures that return result sets. These results will be displayed in the same way. For DBMS's that support multiple result sets from a single stored procedure (e.g. Microsoft SQL Server), one tab will be displayed for each result returned.
Due to a bug in Oracle's JDBC driver, columns with certain data types cannot be retrieved.
SQL statements can be entered in the upper part of the window. Please refer to Editing SQL Statements for details on the editing features of SQL Workbench/J.
The tabbed display allows you to keep more than one statement accessible (without needing to use the history functions). Each tab has its own result set and message panel. When you switch to a different statement tab, the result list in the lower part of the window will change to either the last result or the last message of that statement.
For JDBC drivers which do not support multi-threaded execution (e.g. Oracle's JDBC driver), you can configure your Connection Profile so that SQL Workbench/J will open a new connection for each tab.
If you enter more than one statement in the editor and want to execute all statements as a batch script, you need to delimit each statement. The SQL standard for terminating a SQL statement is the semicolon.
You can specify an alternate delimiter that can be used instead of the semicolon. See the description of the alternate delimiter for details.
Valid "scripts" are:
UPDATE person SET numheads = 2 WHERE name='Beeblebrox';
COMMIT;
You can run any statement that is valid for the current DBMS. If the statement returns a result set, it will be displayed, otherwise any messages from the server will be displayed in the messages tab.

To add or remove editor tabs, right click on the tab header and choose the corresponding menu item. You can re-order the tabs by right clicking on the tab label, then choosing the item that moves the tab to the desired position.

To execute the statement in which the cursor is currently located use Ctrl-Enter. The current statement is defined as the text between the previous SQL delimiter and the next SQL delimiter.
The font that is used for the SQL editor can be defined in the system preferences.
SQL Workbench/J will send the contents of the editor unaltered to the DBMS, so executing DDL statements (CREATE TABLE, ...) is possible. However, when executing statements such as CREATE PROCEDURE which in turn contain valid SQL statements delimited with a semicolon, SQL Workbench/J will send everything up to the first semicolon to the backend. In the case of a CREATE PROCEDURE statement this will obviously result in an error as the statement is not complete.
This is an example of a CREATE PROCEDURE which will not work due to the embedded semicolons in the procedure source itself.
CREATE OR REPLACE FUNCTION proc_sample RETURN INTEGER
IS
  result INTEGER;
BEGIN
  SELECT max(col1) INTO result FROM sometable;
  RETURN result;
END;
When executing this script, Oracle would return an error because SQL Workbench/J will
send everything up to the keyword INTEGER
to the database. Obviously that
fragment would not be correct.
The solution is to terminate the script with a character sequence called the "alternate delimiter". The value of this sequence can be configured in the options dialog as a global default, or per connection profile (so you can use different alternate delimiters for different database systems). The default is the forward slash / defined as a single line delimiter.
If a SQL statement is terminated with the alternate delimiter, that delimiter is used instead of a semicolon. This way the semicolons embedded in CREATE PROCEDURE statements will be sent correctly to the backend DBMS.
So the solution to the above problem is the following script:
CREATE OR REPLACE FUNCTION proc_sample RETURN INTEGER
IS
  result INTEGER;
BEGIN
  SELECT max(col1) INTO result FROM sometable;
  RETURN result;
END;
/
Note the trailing forward slash (/) at the end in order to "turn on" the use of the alternate delimiter. If you run scripts with embedded semicolons and you get an error, please verify the setting for your alternate delimiter.
When is the alternate delimiter used?
As soon as the statement (or script) that you execute is terminated with the alternate delimiter, the alternate delimiter is used to separate the individual SQL statements. When you execute selected text from the editor, be sure to select the alternate delimiter as well, otherwise it will not be recognized (if the alternate delimiter is not selected, the statement to be executed does not end with the alternate delimiter).
You cannot mix the standard semicolon and the alternate delimiter inside one script.
If you use the alternate delimiter (by terminating the whole script with it), then all statements have to be delimited with it. You cannot mix the use of the normal semicolon and the alternate delimiter for one execution. The following statement (when executed completely) would produce an error message:
SELECT sysdate FROM DUAL;

CREATE OR REPLACE FUNCTION proc_sample RETURN INTEGER
IS
  result INTEGER;
BEGIN
  SELECT max(col1) INTO result FROM sometable;
  RETURN result;
END;
/
Because the alternate delimiter is present, SQL Workbench/J will use it to separate the statements, and the SELECT statement at the beginning will be sent to the database together with the CREATE statement. This of course is an invalid statement. You will need to either select and run each statement individually or change the delimiter after the SELECT to the alternate delimiter.
SQL Workbench/J supports reading and writing BLOB (Binary Large OBject) and CLOB (Character Large OBject) columns from and to external files. BLOB columns are sometimes also referred to as binary data; CLOB columns are sometimes also referred to as LONG VARCHAR. The exact data type depends on the DBMS used.
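The examples in this chapter refer to a table named theTable with the columns blob_col and clob_col. As an illustration only (the exact LOB type names depend on your DBMS), such a table could be created like this:

-- hypothetical table used by the examples below (Oracle-style type names)
CREATE TABLE theTable
(
  id       INTEGER PRIMARY KEY,
  blob_col BLOB,
  clob_col CLOB
);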
To insert and update LOB columns, the usual INSERT and UPDATE statements can be used together with a special placeholder that defines the source for the LOB data. When updating a LOB column, different placeholders have to be used for BLOB and CLOB columns, as the process of reading and sending the data is different for binary and character data.
When working with Oracle, only the 10g driver supports the standard JDBC calls used by SQL Workbench/J to read and write the LOB data. Earlier drivers will not work as described in this chapter.
To update a BLOB (or binary) column, use the placeholder {$blobfile=path_to_file} in the place where the actual value has to occur in the INSERT or UPDATE statement:
UPDATE theTable SET blob_col = {$blobfile=c:/data/image.bmp} WHERE id=24;
SQL Workbench/J will rewrite the UPDATE statement and send the contents of the file located in c:/data/image.bmp to the database. The syntax for inserting BLOB data is similar. Note that some DBMS might not allow you to supply a value for the blob column during an insert. In this case you need to first insert the row without the blob column, then use an UPDATE to send the blob data. Make sure to update only one row by specifying an appropriate WHERE clause.
INSERT INTO theTable (id, blob_col) VALUES (42,{$blobfile=c:/data/image.bmp});
This will create a new record with id=42 and the content of c:/data/image.bmp in the column blob_col.
The process of updating or inserting CLOB data is identical to the process for BLOB data. The only difference is the syntax of the placeholder used to specify the source file: the placeholder has to start with {$clobfile= and can optionally contain a parameter to define the encoding of the source file.
UPDATE theTable SET clob_col = {$clobfile=c:/data/manual.html encoding=utf8} WHERE id=42;
If you omit the encoding parameter, SQL Workbench/J will leave the data conversion to the JDBC driver (technically, it will use the PreparedStatement.setAsciiStream() method, whereas with an encoding it will use the PreparedStatement.setCharacterStream() method).
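For example, the following statement (a variation of the example above) leaves the character set conversion to the JDBC driver because no encoding is specified:

UPDATE theTable
   SET clob_col = {$clobfile=c:/data/manual.html}
 WHERE id=42;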
To save the data stored in a BLOB column, the command WbSelectBlob can be used. The syntax of this command is similar to the regular SELECT command, except that a target file has to be specified where the retrieved data should be stored.
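As a hedged sketch of how such a statement could look (column, table and file names are taken from the examples in this chapter; please check the WbSelectBlob description for the exact syntax):

WbSelectBlob blob_col
INTO c:/data/image.bmp
FROM theTable
WHERE id = 24;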
You can also use the WbExport command to export data. The contents of the BLOB columns will be saved into separate files. This works for both export formats (XML and Text).
When the result of your SELECT query contains BLOB columns, they will be displayed as (BLOB) together with a button. When you click on the button, a dialog will be displayed allowing you to save the data to a file, view the data as text (using the selected encoding), display the blob as an image or display a hex view of the blob.
When displaying the BLOB content as a text, you can edit the text. When saving the data, the entered text will be converted to raw data using the selected encoding.
The window will also let you open the contents of the BLOB data with a predefined external tool. The tools that are defined in the options dialog can be selected from a dropdown. To open the BLOB content with one of the tools, select the tool from the dropdown list, then click on the button next to the external tools dropdown. SQL Workbench/J will then retrieve the BLOB data from the server, store it in a temporary file on your harddisk, and run the selected application, passing the temporary file as a parameter.
From within this information dialog, you can also upload a file to be stored in that BLOB column. The file contents will not be sent to the database server until you actually save the changes to your result set (this is the same for all changes you make directly in the result set, for details please refer to Editing the data)
When using the upload function in the BLOB info dialog, SQL Workbench/J will use the file content for any subsequent display of the binary data or the size information in the information dialog. You will need to re-retrieve the data in order to use the blob data from the server.
There are some configuration settings that affect the performance of SQL Workbench/J. On slow computers it is recommended to turn off the usage of the animated icon as the indicator for a running statement.
When running large scripts, the feedback showing which statement is executed can also slow down the execution. It is recommended to either turn off the feedback using WBFEEDBACK OFF or to consolidate the script log.
When running imports or exports, it is recommended to turn off the progress display in the status bar that shows the current row being imported/exported, because this will slow down the process as well. In both cases you can use -showProgress to turn off the display (or set it to a high number such as 1000) in order to reduce the overhead caused by updating the screen.
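As a hedged illustration of both settings (file and table names are examples only; see the WbImport and WbExport descriptions for all options):

-- reduce the feedback for a long running script
WbFeedback off;

-- only update the progress display every 10000 rows
WbImport -file=c:/data/person.txt
         -table=person
         -showProgress=10000;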
SQL Workbench/J offers so called SQL macros, or abbreviations. You can define macros for often used SQL statements. Once defined, you only need to enter the defined macro name and the underlying SQL statement will be executed.
There are two ways to define a SQL macro.
If the current statement in the editor should be defined as a macro, select (highlight) the statement's text and select the corresponding item from the main menu. You will be prompted to supply a name for the new macro. If you supply the name of an existing macro, the existing macro will be overwritten.

Alternatively you can add a new macro through the macro manager dialog. This dialog can also be used to delete and edit existing macros.

Once a macro is defined, you can execute it by simply typing the macro's name in the editor and executing it like any other SQL command.
To execute a macro, you can either type the alias you have defined, or select the macro from the menu; the first 10 macros will be listed there directly. To view the complete list of macros, open the macro list dialog. After selecting a macro there, it can be executed by clicking on the Run button. If you check the option "Replace current SQL", the text in the editor will be replaced with the text from the macro when you click on the Run button.
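For example, assuming you have defined a macro named all_persons for the following statement (the macro name and table are purely illustrative):

SELECT id, firstname, lastname FROM person;

Typing all_persons into the editor and executing it like any other SQL command will then run the stored SELECT.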
menu. The first 10 macros will be listed there directly. To view the complete list of macros select → After selecting a macro, it can be executed by clicking on the Run button. If you check the option "Replace current SQL", then the text in the editor will be replaced with the text from the macro when you click on the run button.![]() | |
Macros will no be evaluated when running in batch mode! |
Apart from the SQL Workbench/J script variables for SQL Statements, additional "parameters" can be used inside a macro definition. These parameters will be replaced before replacing the script variables.
The SQL statement that is eventually executed will be logged into the message panel when invoking the macro from the menu. Macros that use the above parameters cannot be executed correctly by entering the macro alias in the SQL editor (and then executing the "statement").
The parameter keywords are case sensitive.
This feature can be used to create SQL scripts that work only with an additional statement, e.g. for Oracle you could define a macro to run an explain plan for the current statement:
EXPLAIN PLAN FOR ${current_statement}$;
COMMIT;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
When you run this macro, it will run an EXPLAIN PLAN for the statement in which the cursor is currently located, and will immediately display the results of the explain. Note that the ${current_statement}$ keyword is terminated with a semicolon, as the replacement for ${current_statement}$ will never add the semicolon. If you use ${selection}$ instead, you have to pay attention not to select the semicolon in the editor before running this macro.
For PostgreSQL you can define a similar macro that will automatically run the EXPLAIN command for a statement:
explain ${current_statement}$
Another usage of the parameter replacement could be a SQL Statement that retrieves the rowcount that would be returned by the current statement:
SELECT count(*) FROM ( ${current_statement}$ )
The complete history for all editor tabs is saved and loaded into one file, called a workspace. These workspaces can be saved and loaded to restore a specific editing context. You can assign a saved workspace to a connection profile. When the connection is established, the workspace is loaded into SQL Workbench/J. Using this feature you can maintain a completely different set of statements for different connections.
If you do not assign a workspace to a connection profile, a workspace with the name Default.wksp will be used for storing the statement history. This default workspace is shared between all profiles that have no workspace assigned.
To save the current SQL statement history and the visible tabs into a new workspace, select the corresponding menu item.

The default file extension for workspaces is wksp.
Once you have loaded a workspace, you can save it using the corresponding menu item. The current workspace is automatically saved when you exit SQL Workbench/J.

An existing workspace can be loaded with the corresponding menu item.

If you have an external file open in one of the editor tabs, the filename itself will be stored in the workspace. When loading the workspace, SQL Workbench/J will try to load the external file again. If the file does not exist, the last history entry from the saved history for that tab will be displayed.
The workspace file itself is a normal ZIP file, which contains one file with the statement history for each tab. The individual files can be extracted from the workspace using your favorite UNZIP tool.
The text from the current editor can be saved to an external file by choosing the corresponding save or save-as menu item. The filename for the current editor will be remembered. To close the current file, select the discard file menu item (Ctrl-F4) or use the context menu on the tab label itself.

Detaching a file from the editor will remove the text from the editor as well. If you only want to detach the filename from the editor but keep the text, press Ctrl-Shift-F4 or hold down the Shift key while selecting the Discard menu item.
When you load a SQL script and execute the statements, be aware that due to the history management in SQL Workbench/J the content of the external file will be placed into the history buffer. If you load large files, this might lead to massive memory consumption. Currently only the number of statements put into the history can be controlled, but not the total size of the history itself. This might change with a later version.
For MS SQL Server, any message written with the PRINT command will be displayed in the Messages tab after the SQL command has finished. The PRINT command is usually used in stored procedures for logging purposes, but it can also be used as a command on its own:
PRINT "Deleting records..."; DELETE from my_table WHERE value = 42; PRINT "Done."
This will execute the DELETE. Once this script has finished, the Messages tab will contain the text:
Deleting records...
Done.
For Oracle, the DBMS_OUTPUT package is supported. Support for this package can be turned on with the ENABLEOUT command. If this support is not turned on, the messages will not be displayed. This is the same as using the SET SERVEROUTPUT ON command in SQL*Plus.
If you want to turn on support for DBMS_OUTPUT automatically when connecting to an Oracle database, you can put the ENABLEOUT command into the pre-connect script.
Any message "printed" with DBMS_OUTPUT.put_line()
will
be displayed in the message part after the SQL command has finished. Please
refer to the Oracle documentation if you want to learn more about the
DBMS_OUTPUT
package.
dbms_output.put_line("The answer is 42");
Once the command has finished, the following will be displayed in the Messages tab:
The answer is 42
Postgres supports a similar mechanism. Any text returned by a function or stored procedure with the RAISE keyword will be displayed in the message tab as well.
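A minimal PL/pgSQL sketch (function name and message are purely illustrative) that would produce such a message; depending on your settings, this script may need to be terminated with the alternate delimiter because the function body contains semicolons:

CREATE OR REPLACE FUNCTION log_answer() RETURNS void AS $$
BEGIN
  -- the text of the NOTICE is shown in the Messages tab
  RAISE NOTICE 'The answer is 42';
END;
$$ LANGUAGE plpgsql;

SELECT log_answer();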
If your DBMS supports something similar, please let me know. I will try to implement it - provided I have free access to the DBMS. Please send your request to support@sql-workbench.net
Once the data has been retrieved from the database, it can be edited directly in the result set. SQL Workbench/J assumes that enough columns have been retrieved from the table so that a unique identifier is available to identify the rows to be updated.
If you have primary keys defined for the underlying tables, those primary key columns will be used in the WHERE clauses of the generated UPDATE and DELETE statements.
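For example, if the table person has the primary key column id, changing the lastname of a single row could result in a generated statement like this (table, column and values are purely illustrative):

UPDATE person
   SET lastname = 'Beeblebrox'
 WHERE id = 42;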
If no primary key columns are found, the JDBC driver is asked for a best row identifier. If that doesn't return any information, your defined PK Mapping will be queried. If still no PK columns can be found, you will be prompted to select the key columns based on the current result set.
The changes (modified rows, new rows or deleted rows) will not be saved to the database until you explicitly save them using the corresponding menu item. When saving, a COMMIT will be sent to the database automatically (if needed).
If your SELECT was based on more than one table, you will be prompted to specify which table should be updated. Only columns for the chosen table will be included in the UPDATE or INSERT statements.
If no primary key can be found for the update table, you will be prompted to select the columns that should be used to uniquely identify a row in the update table.
If an error is reported during the update, a ROLLBACK will be sent to the database. The COMMIT or ROLLBACK will only be sent if autocommit is turned off.
Columns containing BLOB data will be displayed with a button. By clicking on that button, you can view the blob data, save it to a file or upload the content of a file to the DBMS. Please refer to BLOB support for details.
When editing, SQL Workbench/J will highlight columns that are defined as NOT NULL
in the database. You can turn this feature off, or change the color that is used in the
options dialog.
When editing date, timestamp or time fields, the format specified in the options dialog is used for parsing the entered value and converting it into the internal representation of a date. The value entered must match the format defined there.
If you want to input the current date and time you can use now, today, sysdate, current_timestamp, current_date instead. This will use the current date & time and convert it to the appropriate data type for that column, e.g. now will be converted to the current time for a time column, the current date for a date column and the current date/time for a timestamp column.
These keywords also work when importing text files using WbImport or importing a text file into the result set. The exact keywords that are recognized can be configured in the settings file.
To delete a row from the result, select the corresponding item from the menu. This will remove the currently selected row(s) from the result and mark them for deletion once the changes are saved. No foreign key checks will be done when using this option.
The generated DELETE statements will fail if the deleted row(s) are still referenced by another table. In that case, you can delete the rows together with all dependent rows, as described below.
To delete rows including all dependent rows, choose the corresponding menu item. SQL Workbench/J will then generate the necessary DELETE statements to delete the dependent rows before sending the DELETE for the selected row(s).

Detecting all foreign key dependencies for the current update table might take some time. During this time a message will be displayed in the status bar. The selected row(s) will not be removed from the result set until the dependency check has finished.
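As an illustration only (assuming a hypothetical table orders whose rows are referenced by a table order_item), the generated statements could look like this:

-- dependent rows are deleted first
DELETE FROM order_item WHERE order_id = 42;
-- then the originally selected row
DELETE FROM orders WHERE id = 42;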
Note that the generated SQL statements to delete the dependent rows will only be shown if you have enabled the preview of generated DML statements in the options dialog.
You can also generate a script to delete the selected and all depending rows through the corresponding menu item. This will not remove any rows from the current result set, but instead create and display a script that you can run at a later time.

Once you have retrieved data from a table that has foreign key relations to other tables, you can navigate the relationship for specific rows in the result set. Select the rows for which you want to find the data in the related tables, then right click inside the result set. In the context menu two items are available:
Referenced rows
Referencing rows
Consider the following tables:
BASE (b_id, name)
DETAIL (d_id, base_id, description) with base_id referencing BASE (b_id)
MORE_DETAIL (md_id, detail_id, description) with detail_id referencing DETAIL (d_id)
The context menu for the selected rows will give you the choice in which SQL tab you want the generated SELECT to be pasted. This is similar to the Put SELECT into feature in the table list of the DbExplorer.
Once you have obtained a result set from the table BASE, select (mark) the rows for which you want to retrieve the related rows, e.g. the one where id=1. Using the corresponding context menu item, SQL Workbench/J will create the following statement:

SELECT * FROM DETAIL WHERE base_id = 1;

The result of the generated statement will always be added to the existing results of the chosen SQL panel. By default the generated SQL statement will be appended to the text editor. If you don't want the generated statement to be appended to the editor, hold down the Ctrl key while selecting the desired menu item. In that case, the generated statement will only be written to the messages panel of the SQL tab. If the target tab contains an external file, the statement will never be appended to the editor's text.
To navigate from the child data to the "parent" data, use the Referenced rows menu item.

The additional result tabs can be closed using the corresponding menu item.
The result will be displayed in the order returned by the DBMS (i.e. if you use an ORDER BY in your SELECT, the data will be displayed as sorted by the DBMS).
You can change the sorting of the displayed data by clicking on the header
of the column that should be used for sorting. Initially the data will be
sorted ascending (lower values at the top). If you click on the column again
the sort order will be reversed. The sort order will be indicated by a little
triangle in the column header. If the triangle points upward the data is sorted
ascending, if it points downward the data is sorted descending.
Clicking on a column will remove any previous sorting (including the secondary
columns) and apply the new sorting.
If you want to sort by more than one column, hold down the Ctrl key while clicking on the (second) header. The initial sort order is ascending for that column. To change the sort order hold down the Ctrl key and click on the column header again. The sort order for all "secondary" sort columns will be indicated with a slightly smaller triangle than the one for the primary sort column.
To define a different secondary sort column, you first have to remove the current secondary column. This can be done by holding down the Shift key and clicking on the secondary column again. Note that the data will not be resorted. Once you have removed the secondary column, you can define a different secondary sort column.
Once the data has been retrieved from the server, it can be filtered without the need to re-retrieve it. You can define the filter in two ways: either enter columns and their filter values manually, or create a filter from the currently selected values in the result set.
To define a filter, click on the Filter button in the toolbar or select the corresponding menu item.
A dialog will appear where you can define a filter for the current result set. Each line in the filter dialog defines an expression that will be applied to the column selected in the first dropdown. If you select * for the column, the filter condition will be applied to all columns of the result set.
To add a multi-column expression, press the corresponding button; an existing expression can be deleted with the Remove button.
For character based column data, you can select to ignore the case of the column's data when applying the expression, i.e. when Ignore case is selected, the expression NAME = arthur will match the column values 'Arthur' and 'ARTHUR'.
By default, the column expressions are combined with OR, i.e. a row will be displayed if at least one of the column expressions evaluates to true. If you want to view only rows where all column expressions match, select the AND radio button at the top of the dialog.
Once you have saved a filter to an external file, this filter will be available in the pick list, next to the filter icon. The list will show the last filters that were saved. The number of items displayed in this drop down can be controlled in the settings file.
You can also quickly filter the data based on the value(s) of the currently selected column(s). To apply the filter, select the column values by which you want to filter, then click on the Quickfilter button in the toolbar or select the corresponding item from the menu bar.
Using the Alt key you can select individual columns of one or more rows. Together with the Ctrl key you can select e.g. the first, third and fourth column. You can also select the e.g. second column of the first, second and fifth row.
Whether the quick filter is available depends on the selected rows and columns, as described below.
If only a single row is selected, the quick filter will use the values of the selected columns combined with AND to define the filter (e.g. username = 'Bob' AND job = 'Clerk'). Which columns are used depends on the way you select the row and columns. If the whole row in the result is selected, the quick filter will use the value of the focused column (the one with the yellow rectangle), otherwise the individually selected columns will be used.

If you select a single column in multiple rows, this will create a filter for that column with the values combined with OR (e.g. name = 'Dent' OR name = 'Prefect').

The quick filter will not be available if you select more than one column in multiple rows.
Once you have applied a quick filter, you can use the regular filter definition dialog to check the definition of the filter or to further modify it.
You can export the data of the result set into local files in several formats.
To save the data from the current result set into an external file, choose the corresponding menu item. You will be prompted for the filename. On the right side of the file dialog you will have the possibility to define the type of the export. The export parameters on the right side of the dialog are split into two parts: the upper part defines parameters that are available for all export types, namely the encoding for the file, the format for date and date/time data, and the columns that should be exported.

All format specific options that are available in the lower part are also available when using the WbExport command. For a detailed discussion of the individual options please refer to that section.
The options SQL UPDATE and SQL DELETE/INSERT are only available when the current result has a single table that can be updated, and the primary key columns for that table could be retrieved. If the current result does not have key columns defined, you can select the key columns that should be used when creating the file. If the current result is retrieved from multiple tables, you have to supply a table name to be used for the SQL statements.
Please keep in mind that exporting the data from the result set requires you to load everything into memory. If you need to export data sets which are too big to fit into memory, you should use the WbExport command to either create SQL scripts or to save the data as text or XML files that can be imported into the database using the WbImport command. You can also use the corresponding menu item to export the result of the currently selected SQL statement.
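A hedged sketch of such a WbExport based export (file name and query are examples only; see the WbExport description for all options):

-- export the result of the following SELECT directly to a file
WbExport -type=text
         -file=c:/data/person.txt
         -delimiter='\t'
         -header=true;
SELECT * FROM person;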
You can also copy the data from the result into the system clipboard in four different formats. In any case default settings are used for the various options of the respective format.
Text (tab separated)
This will use a tab as the column separator, and will not quote any values. The end-of-line sequence will be a newline (Unix style) and the column headers will be part of the copied data. Special characters (e.g. newlines) in the actual data will not be replaced (as it is possible with the WbExport command).
When you hold down the Shift key when you select the menu item, the column headers will not be copied to the clipboard. When you hold down the Ctrl key when selecting the menu item, you can choose which columns should be copied to the clipboard. Pressing Shift and Ctrl together is also supported.
SQL (INSERT, UPDATE, or DELETE & INSERT)
The end-of-line sequence will be a newline (Unix style). No cleanup of data will be done as it is possible with the WbExport command, apart from correctly quoting single quotes inside the values (which is required to generate valid SQL)
As with the Save Data as command, the options SQL UPDATE and SQL DELETE/INSERT are only available when the current result set is updateable. If no key columns could be retrieved for the current result, you can manually define the key columns to be used via the corresponding menu item.
If you do not want to copy all columns to the clipboard, hold down the Ctrl key while selecting one of the menu items related to the clipboard. A dialog will then let you select the columns that you want to copy.
Alternatively you can hold down the Alt key while selecting rows/columns in the result set. This will allow you to select only the columns and rows that you want to copy. If you then use one of the formats available in the submenu, only the selected cells will be copied. If you choose to copy the data as UPDATE or DELETE/INSERT statements, the generated SQL statements will not be correct if you did not select the primary key of the underlying update table.
SQL Workbench/J can import tab separated text files into the current result set. This means that you need to issue the appropriate SELECT statement first. The structure of the file has to match the structure of the result set, otherwise an error will occur. To initiate the import select the corresponding menu item.
When selecting the file, you can change some parameters for the import:
| Option | Description |
|---|---|
| Header | If this option is checked, the first line of the import file will be ignored. |
| Delimiter | The delimiter used to separate column values. Enter \t for the tab character. |
| Date Format | The format in which date fields are specified. |
| Decimal char | The character that is used to indicate the decimals in numeric values (typically a dot or a comma). |
| Quote char | The character used to quote values with special characters. Make sure that each opening quote is followed by a closing quote in your text file. |
You can also import text and XML files using the WbImport command. Using the WbImport command is the recommended way to import data, as it is much more flexible, and - more importantly - it does not read the data into memory.
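A hedged sketch of importing a tab separated file with WbImport (file and table names are examples only; see the WbImport description for all options):

WbImport -type=text
         -file=c:/data/person.txt
         -table=person
         -delimiter='\t'
         -header=true;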
You can import the contents of the clipboard into the current result, if the format matches the result set. When you select the corresponding menu item, SQL Workbench/J will check if the current clipboard contents can be imported into the current result. The data can automatically be imported if the first row of the data contains the column names. One of the following two conditions must be true in order for the import to succeed: