


Settings for defining a connector

jmayhew Posts: 28 admin
edited December 20, 2018 12:46AM in Admin Corner
This page describes how to configure a Connector for a specific remote Data Store. The Connector Configuration will then be available in the Data Library for setting up Data Sources for import and export. The common configuration parameters shared by all Connectors are described below; for Connector-specific configuration parameters, mouse over the fields in the form.

Important: settings applied at the Data Source or Session level take precedence over any optional fields that you leave blank here. If you later return to this form and fill in an optional field that you initially left blank, the new value will not override the settings already saved at the Data Source or Session level for this Connector.

Connection Settings

Allows you to connect to a JDBC source for Data Library imports. The following fields are used to define the connection parameters.

Name and Description: the name and description users will see for this Connector Configuration.

Database URI
JDBC URL: the JDBC connection string. The schema name can be included in the URL.
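For illustration only (the hostnames, ports, and schema names below are made up), typical JDBC URLs look like the following; the exact format depends on the JDBC driver you are using:

```text
jdbc:mysql://dbhost.example.com:3306/sales_db
jdbc:postgresql://dbhost.example.com:5432/sales_db
jdbc:oracle:thin:@dbhost.example.com:1521:ORCL
```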

Database Visibility
You can control the database, schemas, and tables that are shown to users when they browse a data source during import. For databases, schemas, and tables, you can choose to:

  • "Show only": returns only the databases, schemas, or tables that you specify here.
  • "Hide": hides the databases, schemas, or tables that you specify here.
  • "Show all": the default setting; displays everything in the data source.

When you select the "Show only" or "Hide" options, a field is provided for specifying the databases, schemas or tables on which you want the option enforced.

Important: These settings are not enforced when users query against the data source; query results still return a complete list of matches. For example, if you choose to "hide" a specific database, the query results will still include that database in the returned results. However, that database will not be displayed to users when they browse the data source.

Import Configuration
Query Prefetch Size: the batch size (in rows) used when prefetching data on import.

Max Column Size: The maximum size in Unicode characters allowed in a cell when importing or exporting data.

Pre-import SQL: SQL statement to execute before beginning import, after the table’s schema is determined.

Post-import SQL: SQL statement to execute after import completes.
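As a purely hypothetical illustration (the table and column names below are invented, and exact syntax depends on your database), a pre-import statement could tidy a staging table before Paxata reads it, and a post-import statement could record the run in an audit table:

```sql
-- Hypothetical pre-import SQL: remove failed rows from a staging table
-- so they are not picked up by the import.
DELETE FROM customer_staging WHERE load_status = 'FAILED';

-- Hypothetical post-import SQL: log that the import completed.
INSERT INTO etl_audit (table_name, event, event_time)
VALUES ('customer_staging', 'PAXATA_IMPORT', CURRENT_TIMESTAMP);
```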

Export Configuration

Export Batch Size: the batch size used when exporting data.

Max VARCHAR Size: the maximum size allowed for a VARCHAR column on export. Columns larger than this size will be created as CLOB columns in the database.

Automatically create table: create a new table automatically on export.

  • If checked, and no table exists: a new table is created and populated with the exported data.
  • If unchecked, and the table exists: the records in the existing table are left intact, and the new records are appended.
  • If checked, and the table exists: the table is dropped, recreated, and the records are added to the new table.
  • If unchecked, and no table exists: the export fails with an error.
Decide whether you want Paxata to build the table or not, and make sure the account used in the credentials has the right to create tables (or that specific privilege) in the target database.

If you want to retain some of the records or the structure, I would suggest using the pre-export and post-export SQL. Pre-export SQL can delete the rows you want to remove while retaining the table structure. A post-export command can be issued as a follow-up for activities like rebuilding an index.

Pre-export SQL: SQL statement to execute before beginning export, after the table is created if auto-create is enabled.

Post-export SQL: SQL statement to execute after export completes.
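For instance (the table and index names below are hypothetical, and ALTER INDEX ... REBUILD syntax varies by database), the retain-the-structure pattern described above might look like:

```sql
-- Hypothetical pre-export SQL: clear out old rows but keep the table structure.
DELETE FROM sales_target WHERE load_date < CURRENT_DATE;

-- Hypothetical post-export SQL: rebuild an index after the new rows are loaded.
ALTER INDEX idx_sales_target_region REBUILD;
```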


User authentication can be done through a Shared Account or an Individual Account. If you choose to authenticate with an Individual Account, the user will be prompted to enter a username and password to access this Data Source. If you choose to authenticate with a Shared Account, the following fields are required.

User: the username for a shared account used to authenticate to the database.

Password: the password for a shared account used to authenticate to the database.

Role: if roles have been implemented for this database, users authenticated with this role can perform queries.


  • Hi, we are struggling to load data into a Teradata table through Paxata. Could you please guide us on how to get it done?
  • The connection is fine; however, we are getting an error while exporting...
  • Hi Mahendra - can you share the error message?  If you don't want to share here then please email error to [email protected] and copy me ([email protected]) and we will pick up troubleshooting from there.  Thank you!
  • Hi,
    Thanks for your reply. We are getting the error below; however, the connection succeeds when we test the data source.


    Export Issues
    Paxata cannot create a table at this level within the database hierarchy.
    Time Started: Mar 5, 2019 3:17 PM
    Time Finished: Mar 5, 2019 3:17 PM
    Exported Location: /
    Exported By: 1442272

    Source Dataset

    Dataset Name: Mahi_RDM_Replace_tg
    Description: AnswerSet for project "Mahi_RDM_Replace" version 6
    Num Rows: 185

    Thank you
  • Akshay Posts: 111 admin
    Hello Mahendra, 

    Right now your exported location is just a "/"; you need to pick an appropriate export path so that Paxata can create a table at that level within the database hierarchy.

    Thank You,
  • Hi Akshay,
    It worked successfully. As you said, once the right export path was picked, the data was inserted into the respective table. Please find the log below. Many thanks for your support.


    Export Issues
    Time Started: Mar 5, 2019 11:53 PM
    Time Finished: Mar 5, 2019 11:53 PM
    Exported To: ADJUSTEMENT_V2
    Exported Location: /SCS03_CORE_VR
    Exported By: 1442272

    Source Dataset

    Dataset Name: Sample
    Description: AnswerSet for project "Mahi_RDM_Replace" version 5
    Num Rows: 185

  • Hi,
    I am facing an error when trying to create a dataset in Paxata using a curl command. The error message is below.

    "ERROR: paxata:connector-jdbc:2018.","ERROR: There is nothing to import. Check that the dataset has columns with data and the columns are not hidden."]

    Can someone guide on this please?
  • bstephens Posts: 44 mod
    edited October 3, 2019 2:03PM

    Here are two examples of using curl and the Datasource Imports REST API for the same table:

    Import using ItemPath
    • "90546951877d4bb8bb21bcfcfea7d2fc" the datasource id retrieved using the datasource/configs REST API
    • /devdb/Customer:  The path to the table of interest. For my configuration on MySQL, this is /<schema>/<tablename>
    Import using Query
    • query : the URL Encoded form of "select * from Customer".  I used https://www.urlencoder.org/, but you should be able to accomplish this in most programming languages 
    • select%20%2A%20from%20Customer
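    To make the two approaches concrete, here is a hedged sketch of the corresponding curl calls. The host, credentials, and dataset name are placeholders; the datasource id and table path are taken from the bullets above, and python3 is used here only to URL-encode the query (the curl lines are left commented out since they require a live server):

```shell
# Placeholders: host, username:password, and the dataset name are hypothetical.
DATASOURCE_ID="90546951877d4bb8bb21bcfcfea7d2fc"   # from the datasource/configs REST API
BASE="http://localhost:8080/rest/datasource/imports/$DATASOURCE_ID"

# 1) Import using ItemPath (for MySQL the path is /<schema>/<tablename>):
#    curl -u username:password -X POST "$BASE?itemPath=/devdb/Customer&name=Customer"

# 2) Import using Query: the SQL must be URL-encoded first.
QUERY=$(python3 -c 'import urllib.parse; print(urllib.parse.quote("select * from Customer"))')
echo "$QUERY"
#    curl -u username:password -X POST "$BASE?query=$QUERY&name=Customer"
```

    Any URL-encoding tool produces the same encoded string; encoding in a script just avoids the manual copy/paste step.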

  • @abhijeetmkhambe

    In your post, you mentioned running a curl command to import a dataset but the error shown reports using a JDBC connector and not a REST API connector:

    "ERROR: paxata:connector-jdbc:2018."

    Could you please reach out to our support directly at [email protected] so we can work out what is happening? I would like to have a look at your curl command and the datasource setup; please include these in your email to support.

  • Hi bstephens and ebarre,
    I was able to resolve the issue with a few trials and errors, and the dataset is now created successfully.

    I want to know if it is possible to create a dataset in Paxata using the REST API from Java. If the answer is 'Yes', is there any sample code that can be used as a starting point?

  • @abhijeetmkhambe

    Our REST API is not restricted to curl; it can be leveraged by any client or language that supports creating and executing HTTP requests with Basic Authentication.

    Here is a simple example of a Java import using org.apache.httpcomponents.httpclient that can be used to programmatically perform the imports that I've shown above using curl. It returns the InputStream from the HTTP Response.  For Imports, the response is a JSON document containing the metadata of the imported dataset. 

    Note: This is quick and dirty to show the basics. I would have far more exception handling and logging for production use.
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import org.apache.commons.codec.binary.Base64;
    import org.apache.http.HttpEntity;
    import org.apache.http.HttpHeaders;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.BufferedHttpEntity;
    import org.apache.http.impl.client.HttpClientBuilder;

    public InputStream DatasourceImport(String name, String datasourceId, String itemPath) throws Exception {
        // Build the Datasource Imports endpoint; itemPath and name should be URL-encoded.
        String url = "http://localhost:8080/rest/datasource/imports/" + datasourceId
                + "?itemPath=" + itemPath + "&name=" + name;

        HttpClient client = HttpClientBuilder.create().build();
        HttpPost postRequest = new HttpPost(url);

        // Add Basic Auth header ("username:password", or ":rest_token" for a REST token)
        String auth = "username:password";
        byte[] encodedAuth = Base64.encodeBase64(auth.getBytes(StandardCharsets.UTF_8));
        String authHeader = "Basic " + new String(encodedAuth, StandardCharsets.UTF_8);
        postRequest.addHeader(HttpHeaders.AUTHORIZATION, authHeader);

        // Execute the request and buffer the entity so the response can be re-read
        HttpResponse response = client.execute(postRequest);
        HttpEntity entity = new BufferedHttpEntity(response.getEntity());

        return entity.getContent();
    }
