Export Table to SQL for Paradox: A Step-by-Step Guide

Paradox is a legacy desktop database format that was popular in the 1990s and early 2000s. Although it’s less common today, many organizations still have valuable data stored in Paradox (.DB) files that needs to be migrated into modern relational databases such as MySQL, PostgreSQL, Microsoft SQL Server, or SQLite. This guide walks through the full process of exporting Paradox tables to SQL — from preparing the source files and choosing a target database, to converting data types, handling special cases, and validating the imported data.
Who this guide is for
- Developers and DBAs tasked with migrating legacy Paradox data to modern SQL databases.
- Analysts and archivists who need to extract data for reporting or preservation.
- Anyone working with Paradox files who wants a reliable, repeatable conversion process.
Overview of the process
- Inspect and prepare Paradox files.
- Choose a target SQL database and tools.
- Extract schema (table structure) from Paradox.
- Map Paradox data types to SQL data types.
- Export data from Paradox into an exchange format (CSV, SQL, or others).
- Import data into the target SQL database.
- Validate and troubleshoot.
1. Inspect and prepare Paradox files
- Locate .DB, .PX, .MB, and .VAL files. A Paradox table typically includes the .DB (data) file and .PX (primary index) file; others (.MB for memo, .VAL for validation rules) may be present.
- Back up all files before beginning. Work on copies to avoid accidental corruption.
- Check file encodings and language-specific code pages (some Paradox files use DOS/Windows code pages). Mismatched encoding will produce garbled text after export.
Tips:
- If you only have .DB files, you may lose some metadata (indexes, constraints). If .MB/.VAL are missing, memo fields or validation rules won’t be recoverable.
- If files were used in a Windows environment and you’re on Linux/macOS, ensure you transfer them in binary mode (not text mode) to preserve content.
2. Choose a target SQL database and tools
Common targets:
- MySQL / MariaDB — good for web apps and general-purpose storage.
- PostgreSQL — recommended for advanced SQL features and stricter data types.
- Microsoft SQL Server — suitable for Windows environments and enterprise deployments.
- SQLite — lightweight local storage and testing.
Tool options:
- Commercial/paid converters (e.g., Full Convert, ESF Database Migration Toolkit) — often easiest, with GUI and direct DB-to-DB transfers.
- Open-source tools and libraries — ODBC drivers for Paradox, Python libraries, or command-line utilities.
- Manual method — export to CSV from a Paradox-capable reader and import CSV into the SQL database.
Recommended approach for reliability: use a tool that can read Paradox natively (ODBC/driver or conversion software), export schema and data, and create appropriate CREATE TABLE and INSERT statements.
3. Extract schema from Paradox
Paradox does not always store schema in the same way modern RDBMS do. To rebuild schema:
A. Use a Paradox reader or ODBC driver
- Install a Paradox ODBC or ODBC-JDBC driver (the Microsoft Jet driver historically supported Paradox; third-party drivers are available for 64-bit systems).
- Connect from your DB client or scripting language (Python, ODBC tools) and query metadata to list fields, types, sizes, indexes.
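For example, here is a minimal pyodbc sketch that lists a table's columns. The driver name, directory, and table name are placeholders to adjust for your installation; Jet-based Paradox drivers are 32-bit only, so this assumes a 32-bit Python on Windows.

```python
# Sketch: list column metadata from a Paradox table via ODBC.
# Driver name, directory, and table name are placeholders; a Jet-based
# Paradox driver requires a 32-bit Python on Windows.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Paradox Driver (*.db )};"
    r"DriverID=538;"
    r"DefaultDir=C:\legacy\paradox;"
)
cursor = conn.cursor()

# cursor.columns() wraps the ODBC SQLColumns call and yields one row per
# field with its name, reported type, size, and nullability.
for col in cursor.columns(table="CUSTOMERS"):
    print(col.column_name, col.type_name, col.column_size, col.nullable)

conn.close()
```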
B. Use conversion tools that generate SQL
- Tools like Full Convert can generate CREATE TABLE statements for the target DB automatically. This saves time and avoids manual mapping errors.
C. Manual inspection
- If tools aren’t available, open the table in a Paradox viewer (e.g., older Paradox application, DBF viewers that support .DB), note field names, field types, lengths, and any constraints.
Important schema elements to capture:
- Field name, data type, size/precision
- Primary key(s) / unique constraints
- Indexes and sort order
- Memo/blobs and how they’re stored (.MB files)
4. Map Paradox data types to SQL data types
Paradox uses data types such as Alpha, Numeric, Date, Time, Currency, Logical (boolean), Memo, and Binary. Mapping depends on the target DB:
Example mappings:
- Alpha (text) → VARCHAR(n) or TEXT (PostgreSQL: VARCHAR(n) or TEXT; MySQL: VARCHAR(n)/TEXT).
- Numeric (integer) → INTEGER / INT.
- Numeric (decimal) → DECIMAL(precision, scale) or NUMERIC.
- Date → DATE.
- Time → TIME or DATETIME (if date and time combined).
- Currency → DECIMAL(18,4) or specific money type (SQL Server: MONEY).
- Logical → BOOLEAN or TINYINT(1).
- Memo → TEXT for long text; if a memo field holds binary data, map it to BYTEA (PostgreSQL) or BLOB (MySQL).
- Binary → BLOB / BYTEA.
Notes:
- Choose field lengths conservatively after sampling data. Overly small VARCHAR lengths cause truncation; overly large ones waste space but are safer for migration.
- Paradox numeric fields may combine integer and decimal formats—inspect sample values to choose precision and scale.
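As a starting point, the mapping can be captured in code. The sketch below targets PostgreSQL; the Paradox type names are those commonly reported by drivers and may differ on your system, so treat it as a template rather than a definitive mapping.

```python
# Sketch: starting-point mapping from Paradox field types to PostgreSQL
# types. The type names your driver reports may differ; adjust lengths
# and precision after sampling real data.
PARADOX_TO_PG = {
    "Alpha":     "VARCHAR({size})",   # or TEXT if declared lengths are unreliable
    "Short":     "SMALLINT",
    "Long":      "INTEGER",
    "Number":    "DOUBLE PRECISION",  # use NUMERIC(p, s) for exact decimals
    "BCD":       "NUMERIC({precision}, {scale})",
    "Money":     "NUMERIC(18, 4)",
    "Date":      "DATE",
    "Time":      "TIME",
    "Timestamp": "TIMESTAMP",
    "Logical":   "BOOLEAN",
    "Memo":      "TEXT",
    "Binary":    "BYTEA",
}

def sql_type(pdx_type: str, size: int = 0, precision: int = 18, scale: int = 4) -> str:
    """Render the SQL type string for one Paradox field."""
    return PARADOX_TO_PG[pdx_type].format(size=size, precision=precision, scale=scale)
```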
5. Export data from Paradox
Three main export methods:
A. Direct DB-to-DB transfer via a converter
- Many commercial converters support direct connection to Paradox and target DBs. They export schema and data automatically.
B. Using ODBC / OLE DB
- Install a Paradox ODBC driver and connect from a scripting language (Python with pyodbc, R, or another tool). Use SELECT queries to fetch data and write INSERTs or bulk import files.
- Example workflow in Python:
- Connect to Paradox ODBC.
- Query table metadata and rows.
- Write out CSV or generate parametrized INSERT statements for the target DB client library.
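A minimal sketch of that workflow, assuming a preconfigured ODBC DSN named ParadoxFiles and a table called CUSTOMERS (both placeholders):

```python
# Sketch: dump one Paradox table to UTF-8 CSV via ODBC.
import csv
import pyodbc

conn = pyodbc.connect("DSN=ParadoxFiles")   # placeholder DSN
cursor = conn.cursor()
cursor.execute("SELECT * FROM CUSTOMERS")   # placeholder table name

with open("customers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
    writer.writerow([col[0] for col in cursor.description])  # header row
    for row in cursor:
        writer.writerow(row)

conn.close()
```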
C. Export to CSV and import
- Open Paradox in a viewer/editor and export each table as CSV (taking care of delimiter, quoting, and encoding).
- Import CSV into target DB using bulk loaders (MySQL LOAD DATA INFILE, PostgreSQL COPY, or SQL Server BULK INSERT). For CSV:
- Ensure proper quoting of fields with commas/newlines.
- Preserve encoding (UTF-8 recommended) or convert as needed.
- Handle NULLs explicitly (Paradox “empty” fields may need translating to SQL NULL); a sketch follows this list.
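For instance, PostgreSQL's COPY can be told which string represents NULL. A sketch using psycopg2; the connection string, table, and file names are placeholders:

```python
# Sketch: load a CSV into PostgreSQL, treating empty unquoted fields as
# NULL rather than empty strings. All names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=target user=postgres")
with conn, conn.cursor() as cur:
    with open("customers.csv", encoding="utf-8") as f:
        cur.copy_expert(
            "COPY customers FROM STDIN WITH (FORMAT csv, HEADER true, NULL '')",
            f,
        )
```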
Handling memo and binary fields:
- Memo fields may be separate files (.MB). Some tools export memo contents inline (as long text) or as separate files; you’ll need to re-associate them with rows during import.
- Binary blobs may require base64 encoding for safe CSV transport, or use direct BLOB support via a DB API.
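A sketch of base64 round-tripping for binary cells (the helper names are illustrative, not from any library):

```python
# Sketch: base64-encode binary data so it survives CSV transport, and
# decode it again on the import side. Helper names are illustrative.
import base64
from typing import Optional

def encode_blob_for_csv(raw: Optional[bytes]) -> str:
    """Return a base64 string safe to place in a CSV cell."""
    return base64.b64encode(raw).decode("ascii") if raw is not None else ""

def decode_blob_from_csv(cell: str) -> Optional[bytes]:
    """Reverse of encode_blob_for_csv, applied during import."""
    return base64.b64decode(cell) if cell else None
```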
Generating SQL INSERTs:
- If you choose to generate INSERT statements, escape single quotes and special characters in text fields. Use parametrized statements from client libraries when possible to avoid SQL injection and quoting errors.
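A sketch of the parametrized approach with psycopg2, which sidesteps manual quoting entirely (table and column names are placeholders):

```python
# Sketch: batched, parameterized inserts into PostgreSQL; the driver
# handles all quoting and escaping. Names are placeholders.
import csv
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=target user=postgres")
with conn, conn.cursor() as cur:
    with open("customers.csv", encoding="utf-8") as f:
        rows = [
            (r["name"], r["email"], r["balance"] or None)  # '' -> NULL
            for r in csv.DictReader(f)
        ]
    # execute_values batches many rows per INSERT statement for speed.
    execute_values(
        cur,
        "INSERT INTO customers (name, email, balance) VALUES %s",
        rows,
    )
```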
6. Import into the target SQL database
A. Create tables
- Use the CREATE TABLE statements generated earlier (or adjust them manually after reviewing data types). Include primary keys and NOT NULL constraints carefully — consider importing data first and adding constraints afterward to avoid errors from legacy data that violates constraints.
B. Use bulk import tools
- PostgreSQL: COPY table FROM 'file.csv' WITH (FORMAT csv, HEADER true, ENCODING 'UTF8');
- MySQL: LOAD DATA INFILE 'file.csv' INTO TABLE table_name FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'; (note the LOCAL keyword and server-side file access rules).
- SQL Server: BULK INSERT or bcp.
C. Use parametrized inserts when importing via a script
- For smaller tables or complex data conversions, script the inserts using the DB client library and parameterized queries.
D. Recreate indexes and constraints
- After data import, create indexes and foreign key constraints. Creating them after import is usually faster and avoids failures due to missing referenced rows.
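For example, with psycopg2 (index, table, and constraint names are illustrative):

```python
# Sketch: add indexes and foreign keys only after the bulk load has
# finished. Table, column, and constraint names are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=target user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("CREATE INDEX idx_customers_email ON customers (email)")
    cur.execute(
        "ALTER TABLE orders ADD CONSTRAINT fk_orders_customer "
        "FOREIGN KEY (customer_id) REFERENCES customers (id)"
    )
```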
E. Handle encoding and locale
- Ensure the database, table, and connection use the intended character set (UTF-8 recommended). If Paradox used a legacy code page (e.g., Windows-1251 for Cyrillic), convert text to UTF-8 during import.
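A one-off re-encoding pass is often the simplest fix. A sketch assuming the source really is Windows-1251; substitute the code page your data actually used:

```python
# Sketch: re-encode an exported CSV from a legacy code page to UTF-8.
# cp1251 is an assumption; use the code page your Paradox data used.
with open("customers_cp1251.csv", encoding="cp1251") as src, \
     open("customers_utf8.csv", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(line)
```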
7. Validate and troubleshoot
Validation steps:
- Row counts: compare source Paradox row counts to target DB row counts.
- Sample records: spot-check text fields, dates, numeric precision, and special characters.
- Checksums: compute checksums or hashes on important columns to ensure data integrity.
- Null vs empty strings: verify how empty values were translated.
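A sketch that automates the first three checks for one table; file, table, and column names are placeholders, and the order-insensitive XOR-of-hashes fingerprint is just one workable choice:

```python
# Sketch: compare row counts and an order-insensitive per-column hash
# between the exported CSV and the imported table. Names are placeholders.
import csv
import hashlib
import psycopg2

def csv_fingerprint(path, column):
    """Row count plus an order-insensitive XOR hash of one column."""
    digest, count = 0, 0
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f):
            digest ^= int(hashlib.md5(row[column].encode("utf-8")).hexdigest(), 16)
            count += 1
    return count, format(digest, "032x")

print("source:", csv_fingerprint("customers.csv", "email"))

conn = psycopg2.connect("dbname=target user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM customers")
    print("target rows:", cur.fetchone()[0])
```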
Common issues and fixes:
- Garbled characters: caused by incorrect encoding — re-export or convert using the correct code page → UTF-8.
- Truncated text: increase VARCHAR length or use TEXT. Re-import after schema adjustment.
- Failed imports due to constraint violations: import without constraints, clean data, then add constraints.
- Memo fields missing: ensure .MB files are included in the export or use a tool that consolidates memo text.
Example: Simple Python workflow (Paradox → CSV → PostgreSQL)
- Use an ODBC driver or tool to open the Paradox table and write out CSV with UTF-8 encoding. Ensure the CSV includes a header row.
- On the PostgreSQL server, create a table with appropriate types:
  CREATE TABLE customers (
      id SERIAL PRIMARY KEY,
      name TEXT,
      email TEXT,
      balance NUMERIC(12,2),
      signup_date DATE,
      notes TEXT
  );
- Use COPY to import:
  COPY customers (name, email, balance, signup_date, notes)
  FROM '/path/to/customers.csv'
  WITH (FORMAT csv, HEADER true, ENCODING 'UTF8');
- Verify:
  SELECT count(*) FROM customers;
  SELECT * FROM customers LIMIT 5;
8. Best practices and tips
- Always work on copies of Paradox files.
- Start with a small test table to verify the workflow before migrating everything.
- Keep a mapping document: Paradox field → SQL field/type → any transformation applied.
- Migrate indexes and constraints after data load to speed up import.
- Log all steps and errors; keep snapshots/backups at major stages.
- Consider archiving original Paradox files (read-only) for future audits.
9. When to use a professional tool or service
If your migration involves many tables, complex relationships, heavy use of memo/binary fields, or you need to preserve indexes/validation logic, using a commercial migration tool or hiring a migration specialist can save time and reduce risk. These tools typically handle metadata, indexes, memo fields, and direct DB-to-DB transfers more reliably than ad-hoc exports.
10. Summary checklist
- [ ] Back up Paradox files (.DB, .PX, .MB, .VAL).
- [ ] Choose target DB and conversion method.
- [ ] Extract schema and map data types.
- [ ] Export data (ODBC / CSV / converter).
- [ ] Create target tables and import data.
- [ ] Recreate indexes and constraints.
- [ ] Validate data and address encoding/format issues.
- [ ] Archive original files and document the process.