We are facing an issue while copying data from Oracle to Azure SQL using ADF Copy Activity.

Anandhswarupp 205 Reputation points
2025-11-13T05:27:27.8133333+00:00

We are facing an issue while copying data from Oracle to Azure SQL using ADF Copy Activity. The pipeline fails or produces incorrect numeric values due to high-precision decimals in Oracle.

Details:

Source (Oracle): TABLE_FIN

Target (Azure SQL): TABLE_FIN_TEST

Oracle column types: NUMBER (in the ADF mapping it is converted to decimal with precision 38, scale 18)

Azure column types: NUMERIC(38,18)

Error encountered:

Operation on target TABLE_FIN failed:
ErrorCode=TypeConversionFailure,
Exception occurred when converting value '37.33333333333333333333333333333333333333'
for column name 'RWEEK' from type 'Decimal' (precision:256, scale:130)
to type 'Decimal' (precision:256, scale:130).
Additional info: Specified cast is not valid.

What we tried:

Created a new Azure SQL table NNTABLE_FIN_TEST with NUMERIC(38,18) for the numeric columns.

Used ROUND(column, 18) in the Oracle source query to limit the decimal places so the copy would run reliably:

SELECT
    EMPID,
    EMPNAME,
    DESIGNATION,
    LOCATION,
    ROUND(HOURSPERWEEK, 18) AS HOURSPERWEEK,
    ROUND(TOTALHOURS, 18) AS TOTALHOURS,
    ROUND(RATE, 18) AS RATE,
    ROUND(COST, 18) AS COST,
    ROUND(WEEK, 18) AS WEEK,
    MANAGER,
    DEPARTMENT
FROM CP_STAFFING_TABLE_FIN;

Result: The pipeline runs successfully, but some numeric values differ from the Oracle source. For example:

Oracle Value                      Azure Result
3.4444444555666666666668888       3.000000000000000000
40.22222222222222222222           40.000000000000000000

Row counts are also slightly different in some cases.

Request for support:

Confirm whether there is any supported way to preserve the full Oracle decimal precision when writing to Azure NUMERIC(38,18) via ADF.

Advise on best practices to avoid data mismatches when copying high-precision numeric values.

Confirm whether any ADF connector or mapping configuration can help prevent rounding/truncation.

We are in production, and preserving numeric accuracy is critical. We applied ROUND(..., 18) in the source query to get a reliable copy, but we still see mismatched values. Please advise the safest approach.

Additional Info:

source table: TABLE_FIN

target table: TABLE_FIN_TEST

pipeline: ADF Copy Activity

Please take action as soon as possible.

Azure Data Factory
An Azure service for ingesting, preparing, and transforming data at scale.

2 answers

  1. Q&A Assist
    2025-11-13T05:27:54.2233333+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    To address the issue of high-precision decimals when copying data from Oracle to Azure SQL using Azure Data Factory (ADF), consider the following recommendations:

    1. Data Type Mapping: Ensure that the data types in your Azure SQL target table are appropriately defined to accommodate the precision of the Oracle source. Since you are using NUMERIC(38,18) in Azure SQL, it can hold values with up to 18 decimal places. However, Oracle's NUMBER type can store more precision than this, which leads to truncation or rounding when the data lands in the sink (see the DDL sketch after this list).
    2. Use of ROUND Function: While you have already implemented the ROUND(column, 18) function in your source query, it is crucial to ensure that this function is applied consistently across all relevant numeric columns. This should help in limiting the precision to what Azure SQL can handle. However, be aware that rounding may still lead to differences in values, especially if the original values have more precision than what is being retained.
    3. Consider Data Type Conversion: If the precision of the data in Oracle exceeds what Azure SQL can handle, consider converting the data types in Oracle to a more compatible format before the copy operation. For example, you might use CAST or CONVERT to ensure that the data fits within the limits of NUMERIC(38,18).
    4. Testing with Sample Data: Before executing the full pipeline, test the copy activity with a smaller dataset to verify that the values are being transferred accurately. This will help identify any potential issues without affecting the entire production dataset (a limited-row query sketch follows this list).
    5. Review ADF Connector Settings: Check if there are any specific settings in the ADF Oracle connector that could be adjusted to better handle high-precision decimals. While the context does not specify particular configurations for preventing rounding or truncation, ensuring that you are using the latest version of the connector may help, as updates often include performance and compatibility improvements.
    6. Consult Documentation and Support: If issues persist, consider consulting the Azure Data Factory documentation for any additional settings or best practices related to handling numeric data types. Engaging with Microsoft support may also provide insights specific to your use case and configuration.
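
    The two points above can be made concrete with a couple of sketches. First, a minimal DDL for the Azure SQL target, assuming the column list from the source query in the question; the non-numeric types shown here are illustrative placeholders and the real table will have its own types, keys, and nullability settings:

    CREATE TABLE dbo.TABLE_FIN_TEST (
        EMPID        INT,
        EMPNAME      NVARCHAR(200),
        DESIGNATION  NVARCHAR(200),
        LOCATION     NVARCHAR(200),
        HOURSPERWEEK NUMERIC(38, 18),  -- 38 total digits, 18 after the decimal point
        TOTALHOURS   NUMERIC(38, 18),
        RATE         NUMERIC(38, 18),
        COST         NUMERIC(38, 18),
        WEEK         NUMERIC(38, 18),
        MANAGER      NVARCHAR(200),
        DEPARTMENT   NVARCHAR(200)
    );

    Second, for testing with a small sample before a full run, the Oracle source query can be limited to a handful of rows (the row limit here is arbitrary; any small filter works):

    SELECT EMPID, ROUND(RATE, 18) AS RATE, ROUND(COST, 18) AS COST
    FROM   CP_STAFFING_TABLE_FIN
    WHERE  ROWNUM <= 100;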

    By following these steps, you should be able to minimize discrepancies in numeric values when copying data from Oracle to Azure SQL.



  2. Pratyush Vashistha 5,045 Reputation points Microsoft External Staff Moderator
    2025-11-19T08:39:07.1233333+00:00

    Hello Anandhswarupp,

    Thank you for reaching out on the Microsoft Q&A portal and for providing such a detailed description of the issue. I understand that you're facing a critical data precision issue when copying high-precision numeric data from Oracle to Azure SQL. It's certainly a common challenge, but we can definitely work through it.

    The error TypeConversionFailure and the subsequent data mismatch, even after using ROUND, point to an issue where the data type is being misinterpreted at some point during the data movement process. The Oracle NUMBER type can have a floating precision that doesn't always map perfectly to the fixed NUMERIC(38, 18) in Azure SQL, especially as it passes through the Azure Data Factory (ADF) integration runtime.

    When ADF reads the data, its internal engine infers a very high-precision data type (as seen in your error message), which then fails to convert to the target's fixed precision and scale. Your ROUND function was a good idea, but the incorrect results suggest that a loss of precision is happening after the query is executed, likely due to an implicit cast to a less precise type like a float or double within the ADF pipeline before it reaches the sink.
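
    Two quick checks on the Oracle side can make this concrete. The failing value from the error message fits comfortably once it is explicitly reduced to 18 decimal places, so the conversion itself is not the blocker (a minimal sketch, run against DUAL):

    SELECT CAST(37.33333333333333333333333333333333333333 AS NUMBER(38, 18)) AS RWEEK_CAST
    FROM   DUAL;
    -- Expected result: 37.333333333333333333 (rounded to 18 decimal places)

    And to see which rows actually carry more precision than the sink can hold (column and table names taken from the query in the question; adjust as needed):

    SELECT EMPID, HOURSPERWEEK, RATE, COST, WEEK
    FROM   CP_STAFFING_TABLE_FIN
    WHERE  HOURSPERWEEK <> ROUND(HOURSPERWEEK, 18)
       OR  RATE <> ROUND(RATE, 18)
       OR  COST <> ROUND(COST, 18)
       OR  WEEK <> ROUND(WEEK, 18);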

    To ensure a successful and accurate data copy, we need to be more explicit about the data type before it even leaves the Oracle source.

    Explicit Casting in the Source Query

    Instead of relying on ADF to correctly interpret the rounded number, you should explicitly cast the column to the exact precision and scale that matches your Azure SQL target table.

    Modify your source query in the ADF Copy Activity to use the CAST function. This forces Oracle to format the data correctly before sending it to ADF.

    1. Update the Source Query in ADF
    • Go to your ADF Copy Activity settings.
      • In the "Source" tab, update your query to cast each numeric column.
    SELECT
        EMPID,
        EMPNAME,
        DESIGNATION,
        LOCATION,
        CAST(HOURSPERWEEK AS NUMBER(38, 18)) AS HOURSPERWEEK,
        CAST(TOTALHOURS AS NUMBER(38, 18)) AS TOTALHOURS,
        CAST(RATE AS NUMBER(38, 18)) AS RATE,
        CAST(COST AS NUMBER(38, 18)) AS COST,
        CAST(WEEK AS NUMBER(38, 18)) AS WEEK,
        MANAGER,
        DEPARTMENT
    FROM TABLE_FIN
    
    • In the Mapping tab of your Copy Activity, ensure that the data types are correctly inferred. After changing the source query with the CAST function, you can click "Import schemas" again. The source type should now be recognized with the correct precision, and it should map directly to the Decimal type with a precision of 38 and a scale of 18 on the sink side.
    • If you are using a staging account or PolyBase for this copy operation, be aware that these intermediate steps have their own data type conversion rules that can sometimes cause issues. The explicit CAST in the source query is the best way to mitigate this, as it prepares the data correctly before it ever hits the staging environment. For a quick test, you could try running the copy without staging to see if the behavior changes, which would confirm if the issue lies with the staging process.

    I am confident that implementing the CAST function in your source query will resolve both the pipeline failure and the data mismatch issues you are observing.
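
    Once the pipeline has been rerun, a quick aggregate comparison on both sides can confirm that nothing was rounded or dropped. A minimal sketch for the Azure SQL side (table and column names taken from the question):

    SELECT COUNT(*)  AS ROW_COUNT,
           SUM(RATE) AS TOTAL_RATE,
           SUM(COST) AS TOTAL_COST
    FROM   TABLE_FIN_TEST;
    -- Run the equivalent COUNT(*)/SUM(...) query against the Oracle source table
    -- and compare; any difference points to rounding or dropped rows in the copy.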

    Thank you again for asking your question. Please let us know if this solution works for you or if you have any further questions!


    Please "Accept as Answer" or click 'Thumbs Up YES' if the answer provided is useful, so that you can help others in the community looking for remediation for similar issues.

    Thanks

    Pratyush


