This page documents API and behavior changes that have the potential to break existing applications updating from EF Core 9 to EF Core 10. Make sure to review earlier breaking changes if updating from an earlier version of EF Core:
- Breaking changes in EF Core 9
- Breaking changes in EF Core 8
- Breaking changes in EF Core 7
- Breaking changes in EF Core 6
Summary
Note
If you are using Microsoft.Data.Sqlite, please see the separate section below on Microsoft.Data.Sqlite breaking changes.
Low-impact changes
Application Name is now injected into the connection string
New behavior
When a connection string without an Application Name is passed to EF, EF now injects an Application Name containing anonymous information about the EF and SqlClient versions being used. In the vast majority of cases this doesn't affect the application in any way, but it can change behavior in some edge cases. For example, if you connect to the same database with both EF and another, non-EF data access technology (e.g. Dapper, ADO.NET), SqlClient will use a different internal connection pool for each, since EF now uses a different, updated connection string (one where an Application Name has been injected). If this sort of mixed access is done within a TransactionScope, it can cause escalation to a distributed transaction where previously none was necessary, due to the usage of two connection strings which SqlClient identifies as two distinct databases.
Mitigations
The mitigation is simply to define an Application Name in your connection string. Once one is defined, EF does not overwrite it, and the original connection string is preserved exactly as-is.
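For example, the following configuration already specifies an Application Name (the server, database, and application name shown here are illustrative), so EF passes the connection string through unchanged:
// Minimal sketch, assuming UseSqlServer; the connection string values are illustrative.
// Because an Application Name is already present, EF does not inject its own.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder.UseSqlServer(
        "Server=.;Database=Blogging;Trusted_Connection=True;Application Name=MyApp");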
SQL Server json data type used by default on Azure SQL and compatibility level 170
Old behavior
Previously, when mapping primitive collections or owned types to JSON in the database, the SQL Server provider stored the JSON data in an nvarchar(max) column:
public class Blog
{
// ...
// Primitive collection, mapped to nvarchar(max) JSON column
public string[] Tags { get; set; }
// Owned entity type mapped to nvarchar(max) JSON column
public List<Post> Posts { get; set; }
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Blog>().OwnsMany(b => b.Posts, b => b.ToJson());
}
For the above, EF previously generated the following table:
CREATE TABLE [Blogs] (
...
[Tags] nvarchar(max),
[Posts] nvarchar(max)
);
New behavior
With EF 10, if you configure EF with UseAzureSql (see documentation), or with a compatibility level of 170 or above (see documentation), EF maps to the new JSON data type instead:
CREATE TABLE [Blogs] (
...
[Tags] json,
[Posts] json
);
Although the new JSON data type is the recommended way to store JSON data in SQL Server going forward, there may be some behavioral differences when transitioning from nvarchar(max), and some specific query forms may not be supported. For example, SQL Server does not support the DISTINCT operator over JSON arrays, and queries attempting to do so will fail.
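As an illustration, the following query projects the JSON column and applies Distinct, which translates to a SELECT DISTINCT over the json column and may therefore fail; the query shape is an example, not taken from the original article:
// Illustrative query shape: SELECT DISTINCT over a json column is not supported by SQL Server.
var tagSets = await context.Blogs
    .Select(b => b.Tags)
    .Distinct()
    .ToListAsync();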
Note that if you have an existing table and are using UseAzureSql, upgrading to EF 10 will cause a migration to be generated that alters all existing nvarchar(max) JSON columns to json. This alter operation is supported and should apply seamlessly, but it is a non-trivial change to your database.
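As a rough sketch of what such a migration contains (the exact code produced by dotnet ef migrations add may differ), each affected column gets an AlterColumn operation along these lines:
// Sketch only; column names and nullability here are illustrative.
migrationBuilder.AlterColumn<string>(
    name: "Tags",
    table: "Blogs",
    type: "json",
    nullable: true,
    oldClrType: typeof(string),
    oldType: "nvarchar(max)",
    oldNullable: true);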
Why
The new JSON data type introduced by SQL Server is a superior, first-class way to store and interact with JSON data in the database; it notably brings significant performance improvements (see documentation). All applications using Azure SQL Database or SQL Server 2025 are encouraged to migrate to the new JSON data type.
Mitigations
If you are targeting Azure SQL Database and do not wish to transition to the new JSON data type right away, you can configure EF with a compatibility level lower than 170:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseAzureSql("<connection string>", o => o.UseCompatibilityLevel(160));
}
If you're targeting on-premises SQL Server, the default compatibility level with UseSqlServer is currently 150 (SQL Server 2019), so the JSON data type is not used.
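Conversely, if you are targeting SQL Server 2025 with UseSqlServer and want to opt into the json data type, you can raise the compatibility level explicitly; a minimal sketch:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder.UseSqlServer("<connection string>", o => o.UseCompatibilityLevel(170));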
As an alternative, you can explicitly set the column type on specific properties to be nvarchar(max):
public class Blog
{
public string[] Tags { get; set; }
public List<Post> Posts { get; set; }
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Blog>().PrimitiveCollection(b => b.Tags).HasColumnType("nvarchar(max)");
modelBuilder.Entity<Blog>().OwnsMany(b => b.Posts, b => b.ToJson().HasColumnType("nvarchar(max)"));
}
Parameterized collections now use multiple parameters by default
Old behavior
In EF Core 9 and earlier, parameterized collections in LINQ queries (such as those used with .Contains()) were translated to SQL using a JSON array parameter by default. Consider the following query:
int[] ids = [1, 2, 3];
var blogs = await context.Blogs.Where(b => ids.Contains(b.Id)).ToListAsync();
On SQL Server, this generated the following SQL:
@__ids_0='[1,2,3]'
SELECT [b].[Id], [b].[Name]
FROM [Blogs] AS [b]
WHERE [b].[Id] IN (
SELECT [i].[value]
FROM OPENJSON(@__ids_0) WITH ([value] int '$') AS [i]
)
New behavior
Starting with EF Core 10.0, parameterized collections are now translated using multiple scalar parameters by default:
SELECT [b].[Id], [b].[Name]
FROM [Blogs] AS [b]
WHERE [b].[Id] IN (@ids1, @ids2, @ids3)
Why
The new default translation provides the query planner with cardinality information about the collection, which can lead to better query plans in many scenarios. The multiple-parameter approach strikes a balance between plan cache efficiency (by parameterizing) and query optimization (by providing cardinality).
However, different workloads may benefit from different translation strategies depending on collection sizes, query patterns, and database characteristics.
Mitigations
If you encounter issues with the new default behavior (such as performance regressions), you can configure the translation mode globally:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
=> optionsBuilder
.UseSqlServer("<CONNECTION STRING>",
o => o.UseParameterizedCollectionMode(ParameterTranslationMode.Constant));
Available modes are:
- ParameterTranslationMode.MultipleParameters - The new default (multiple scalar parameters)
- ParameterTranslationMode.Constant - Inlines values as constants (pre-EF8 default behavior)
- ParameterTranslationMode.Parameter - Uses a single JSON array parameter (EF8-9 default)
You can also control the translation on a per-query basis:
// Use constants instead of parameters for this specific query
var blogs = await context.Blogs
.Where(b => EF.Constant(ids).Contains(b.Id))
.ToListAsync();
// Use a single parameter (e.g. JSON parameter with OPENJSON) instead of parameters for this specific query
var blogs = await context.Blogs
.Where(b => EF.Parameter(ids).Contains(b.Id))
.ToListAsync();
// Use multiple scalar parameters for this specific query. This is the default in EF 10, but is useful if the default was changed globally:
var blogs = await context.Blogs
.Where(b => EF.MultipleParameters(ids).Contains(b.Id))
.ToListAsync();
For more information about parameterized collection translation, see the documentation.
ExecuteUpdateAsync now accepts a regular, non-expression lambda
Old behavior
Previously, ExecuteUpdate accepted an expression tree argument (Expression<Func<...>>) for the column setters.
New behavior
Starting with EF Core 10.0, ExecuteUpdate now accepts a non-expression argument (Func<...>) for the column setters. If you were building expression trees to dynamically create the column setters argument, your code will no longer compile - but can be replaced with a much simpler alternative (see below).
Why
The fact that the column setters parameter was an expression tree made it quite difficult to do dynamic construction of the column setters, where some setters are only present based on some condition (see Mitigations below for an example).
Mitigations
Code that was building expression trees to dynamically create the column setters argument will need to be rewritten - but the result will be much simpler. For example, let's assume we want to update a Blog's Views, but conditionally also its Name. Since the setters argument was an expression tree, code such as the following needed to be written:
// Base setters - update the Views only
Expression<Func<SetPropertyCalls<Blog>, SetPropertyCalls<Blog>>> setters =
s => s.SetProperty(b => b.Views, 8);
// Conditionally add SetProperty(b => b.Name, "foo") to setters, based on the value of nameChanged
if (nameChanged)
{
var blogParameter = Expression.Parameter(typeof(Blog), "b");
setters = Expression.Lambda<Func<SetPropertyCalls<Blog>, SetPropertyCalls<Blog>>>(
Expression.Call(
instance: setters.Body,
methodName: nameof(SetPropertyCalls<Blog>.SetProperty),
typeArguments: [typeof(string)],
arguments:
[
Expression.Lambda<Func<Blog, string>>(Expression.Property(blogParameter, nameof(Blog.Name)), blogParameter),
Expression.Constant("foo")
]),
setters.Parameters);
}
await context.Blogs.ExecuteUpdateAsync(setters);
Manually creating expression trees is complicated and error-prone, and made this common scenario much more difficult than it should have been. Starting with EF 10, you can now write the following instead:
await context.Blogs.ExecuteUpdateAsync(s =>
{
s.SetProperty(b => b.Views, 8);
if (nameChanged)
{
s.SetProperty(b => b.Name, "foo");
}
});
Complex type column names are now uniquified
Old behavior
Previously, when mapping complex types to table columns, if multiple properties in different complex types had the same column name, they would silently share the same column.
New behavior
Starting with EF Core 10.0, complex type column names are uniquified by appending a number at the end if another column with the same name exists on the table.
Why
This prevents data corruption that could occur when multiple properties are unintentionally mapped to the same column.
Mitigations
If you need multiple properties to share the same column, configure them explicitly:
modelBuilder.Entity<Customer>(b =>
{
b.ComplexProperty(c => c.ShippingAddress, p => p.Property(a => a.Street).HasColumnName("Street"));
b.ComplexProperty(c => c.BillingAddress, p => p.Property(a => a.Street).HasColumnName("Street"));
});
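For reference, a minimal model shape assumed by the configuration above (the type and property names mirror the snippet; Address is a complex type):
public class Customer
{
    public int Id { get; set; }
    public Address ShippingAddress { get; set; }
    public Address BillingAddress { get; set; }
}

public class Address
{
    public string Street { get; set; }
}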
Nested complex type properties use full path in column names
Old behavior
Previously, properties on nested complex types were mapped to columns using just the declaring type name. For example, EntityType.Complex.NestedComplex.Property was mapped to column NestedComplex_Property.
New behavior
Starting with EF Core 10.0, properties on nested complex types use the full path to the property as part of the column name. For example, EntityType.Complex.NestedComplex.Property is now mapped to column Complex_NestedComplex_Property.
Why
This provides better column name uniqueness and makes it clearer which property maps to which column.
Mitigations
If you need to maintain the old column names, configure them explicitly:
modelBuilder.Entity<EntityType>()
.ComplexProperty(e => e.Complex)
.ComplexProperty(o => o.NestedComplex)
.Property(c => c.Property)
.HasColumnName("NestedComplex_Property");
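For reference, a minimal model shape matching the configuration above (names mirror the example; Complex and NestedComplex are complex types):
public class EntityType
{
    public int Id { get; set; }
    public ComplexType Complex { get; set; }
}

public class ComplexType
{
    public NestedComplexType NestedComplex { get; set; }
}

public class NestedComplexType
{
    public string Property { get; set; }
}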
IDiscriminatorPropertySetConvention signature changed
Old behavior
Previously, IDiscriminatorPropertySetConvention.ProcessDiscriminatorPropertySet took IConventionEntityTypeBuilder as a parameter.
New behavior
Starting with EF Core 10.0, the method signature changed to take IConventionTypeBaseBuilder instead of IConventionEntityTypeBuilder.
Why
This change allows the convention to work with both entity types and complex types.
Mitigations
Update your custom convention implementations to use the new signature:
public virtual void ProcessDiscriminatorPropertySet(
IConventionTypeBaseBuilder typeBaseBuilder, // Changed from IConventionEntityTypeBuilder
string name,
Type type,
MemberInfo memberInfo,
IConventionContext<IConventionProperty> context)
IRelationalCommandDiagnosticsLogger methods add logCommandText parameter
Old behavior
Previously, methods on IRelationalCommandDiagnosticsLogger such as CommandReaderExecuting, CommandReaderExecuted, CommandScalarExecuting, and others accepted a command parameter representing the database command being executed.
New behavior
Starting with EF Core 10.0, these methods now require an additional logCommandText parameter. This parameter contains the SQL command text that will be logged, which may have sensitive data redacted when EnableSensitiveDataLogging() is not enabled.
Why
This change supports the new feature to redact inlined constants from logging by default. When EF inlines parameter values into SQL (e.g., when using EF.Constant()), those values are now redacted from logs unless sensitive data logging is explicitly enabled. The logCommandText parameter provides the redacted SQL for logging purposes, while the command parameter contains the actual SQL that gets executed.
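If you rely on seeing those inlined values in your logs, you can opt back in by enabling sensitive data logging; the provider shown below is illustrative, and the usual caveats about logging sensitive data apply:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder
        .UseSqlServer("<connection string>")
        .EnableSensitiveDataLogging(); // inlined constants are no longer redacted from logs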
Mitigations
If you have a custom implementation of IRelationalCommandDiagnosticsLogger, you'll need to update your method signatures to include the new logCommandText parameter. For example:
public InterceptionResult<DbDataReader> CommandReaderExecuting(
IRelationalConnection connection,
DbCommand command,
DbContext context,
Guid commandId,
Guid connectionId,
DateTimeOffset startTime,
string logCommandText) // New parameter
{
// Use logCommandText for logging purposes
// Use command for execution-related logic
return default;
}
The logCommandText parameter contains the SQL to be logged (with inlined constants potentially redacted), while command.CommandText contains the actual SQL that will be executed against the database.
Microsoft.Data.Sqlite breaking changes
Summary
High-impact changes
Using GetDateTimeOffset without an offset now assumes UTC
Old behavior
Previously, when using GetDateTimeOffset on a textual timestamp that did not have an offset (e.g., 2014-04-15 10:47:16), Microsoft.Data.Sqlite would assume the value was in the local time zone. That is, the value was parsed as 2014-04-15 10:47:16+02:00 (assuming a local time zone of UTC+2).
New behavior
Starting with Microsoft.Data.Sqlite 10.0, when using GetDateTimeOffset on a textual timestamp that does not have an offset, Microsoft.Data.Sqlite will assume the value is in UTC.
Why
This aligns with SQLite's behavior, where timestamps without an offset are treated as UTC.
Mitigations
Code should be adjusted accordingly.
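For example, here is a minimal sketch of reading such a value with SqliteDataReader (the table, column, and variable names are illustrative); the returned offset is now +00:00, and you can convert explicitly if you need local time:
// connection is assumed to be an open SqliteConnection.
using var command = connection.CreateCommand();
command.CommandText = "SELECT Timestamp FROM Events"; // textual timestamps without an offset
using var reader = command.ExecuteReader();
while (reader.Read())
{
    DateTimeOffset utcValue = reader.GetDateTimeOffset(0); // now interpreted as UTC (+00:00)
    DateTimeOffset localValue = utcValue.ToLocalTime();    // convert if local time is required
}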
As a last or temporary resort, you can revert to the previous behavior by setting the Microsoft.Data.Sqlite.Pre10TimeZoneHandling AppContext switch to true; see AppContext for library consumers for more details.
AppContext.SetSwitch("Microsoft.Data.Sqlite.Pre10TimeZoneHandling", isEnabled: true);
Writing DateTimeOffset into REAL column now writes in UTC
Old behavior
Previously, when writing a DateTimeOffset value into a REAL column, Microsoft.Data.Sqlite would write the value without taking the offset into account.
New behavior
Starting with Microsoft.Data.Sqlite 10.0, when writing a DateTimeOffset value into a REAL column, Microsoft.Data.Sqlite converts the value to UTC before performing the conversion and writing it.
Why
The value written previously was incorrect and did not align with SQLite's behavior, where REAL timestamps are assumed to be UTC.
Mitigations
Code should be adjusted accordingly.
As a last or temporary resort, you can revert to the previous behavior by setting the Microsoft.Data.Sqlite.Pre10TimeZoneHandling AppContext switch to true; see AppContext for library consumers for more details.
AppContext.SetSwitch("Microsoft.Data.Sqlite.Pre10TimeZoneHandling", isEnabled: true);
Using GetDateTime with an offset now returns value in UTC
Old behavior
Previously, when using GetDateTime on a textual timestamp that had an offset (e.g., 2014-04-15 10:47:16+02:00), Microsoft.Data.Sqlite would return the value with DateTimeKind.Local (even if the offset was not the local offset). The time itself was parsed correctly, taking the offset into account.
New behavior
Starting with Microsoft.Data.Sqlite 10.0, when using GetDateTime on a textual timestamp that has an offset, Microsoft.Data.Sqlite will convert the value to UTC and return it with DateTimeKind.Utc.
Why
Even though the time was parsed correctly, the result depended on the machine's configured local time zone, which could lead to unexpected results.
Mitigations
Code should be adjusted accordingly.
As a last or temporary resort, you can revert to the previous behavior by setting the Microsoft.Data.Sqlite.Pre10TimeZoneHandling AppContext switch to true; see AppContext for library consumers for more details.
AppContext.SetSwitch("Microsoft.Data.Sqlite.Pre10TimeZoneHandling", isEnabled: true);