
18 Dec 2025

SQL Server 2025 Express Edition Download, Install and Configure

 Problem

I heard there is a new version of SQL Server Express Edition with the 2025 release. What is new in this edition? Is it hard to install and configure SQL Server 2025 Express Edition?


Solution

In this tip, we will show how to download, install, and configure SQL Server 2025 Express Edition.

What is SQL Server 2025 Express Edition?

SQL Server 2025 Express Edition is the new free edition of SQL Server, and it can be used in production environments. SQL Server ships in several editions for Windows (there are Linux installers as well). The most feature-rich edition for production is Enterprise Edition. Express Edition has limitations and does not include all the features available in the Enterprise and Developer editions, but it is free, which makes it a cost-effective option that companies use for specific workloads.

Why would I use the SQL Server Express Edition?

SQL Server Express Edition is a great free edition with several features that you can use in production for small or distributed workloads. It is also a good way to start learning about the database engine; later, you can upgrade to a more sophisticated edition such as Standard or Enterprise.

How to download SQL Server Express Edition 2025?

First, download the SQL Server edition from the following URL: https://www.microsoft.com/en-us/sql-server/sql-server-downloads

Secondly, click the Download now option under the SQL Server 2025 Express edition title.

download sql server 2025 express

This option downloads an installer named SQL2025-SSEI-Expr.exe, which is the file we will use to install SQL Server Express Edition.

Step-by-Step Guide to install SQL Server Express Edition 2025

To install SQL Server Express edition, follow these steps.

First, run the SQL2025-SSEI-Expr.exe.

Second, select the installation option.

The Basic installation installs the default features and is the fastest option. The Custom installation lets you select which features to install; we will use this option so we can customize our installation. Use the Download Media option if you only want to download the software now and install it later.

download sql server 2025 express

Third, you have to select the location to install SQL Server. Press the Browse button if you want to install in a different path than the default one. Once selected, press the install button.

download sql server 2025 express

After that, the installer will download the installation package.

download sql server 2025 express

SQL Server Installation Center

After the download completes, the SQL Server Installation Center will open. First, click the New SQL Server standalone installation or add features option.

SQL Server Installation Center

Read the license terms and, if you agree, accept them to proceed with the installation.

sql server 2025 express license terms

Check for updates and press Next.

sql server 2025 express microsoft update

The installer will check whether your machine meets the requirements for installation. It checks the registry, the ports, and the firewall configuration, and it verifies that .NET Framework 4.7.2 or newer is installed.

sql server 2025 express install rules

Installation Type

Next, you will choose the installation type.

You have two options here:

  • Perform a new installation of SQL Server 2025.
  • Add features to an existing instance – use this when SQL Server Express 2025 is already installed and you want to add features that were not installed initially.
sql server 2025 express installation type

Additionally, you can connect to Azure. In this tip, we will focus on an on-premises SQL Server Express install. If you want to learn more about Azure, refer to this link.

sql server 2025 express azure extension

Feature Selection

At this stage, we have the features. Let me explain each one:

  • The Database Engine is the core of SQL Server; it creates databases, runs queries, and handles transactions and security.
  • Secondly, we have SQL Server Replication. It is used to replicate data and database objects from one database to another database on a different server or instance.
  • AI Services and Language Extensions let you run Python or R code from T-SQL.
  • Full-Text and Semantic Extractions creates special indexes used to search text columns or binary data in a database.
  • Also, we have PolyBase. It is used to query external data as if it were in local tables. For example, you can connect to data in Teradata or MongoDB, or access Parquet and JSON files, and query them as if they were local tables.
  • Finally, we can install LocalDB, a lightweight database engine designed for developers to test features and functionality. On this page you can also select the root directory of your instance.
sql server 2025 express feature selection

Once done, enter a name for your instance and press Next.

sql server 2025 express instance configuration

Server Configuration

In the Server Configuration window, you can verify the Accounts created for the service and check the Startup Type for the services. Here you have an explanation for each option:

  • Automatic: Starts every time Windows boots and is available automatically.
  • Manual: Stays stopped at startup and only starts if a user starts it manually.
  • Disabled: It doesn’t start at all, even if a program or a user tries to start it, until it is manually enabled.

The Grant Perform Volume Maintenance Task Privilege option enables instant file initialization, which speeds up creating databases and growing data files. In many cases, it also reduces database restore time and speeds up autogrowth.

However, because file space is not zeroed out when this privilege is used, previously deleted data on disk could be exposed. For more information about this topic, refer to this link.
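After installation, you can check whether the service account can actually use instant file initialization. This is a minimal sketch; the instant_file_initialization_enabled column is available in SQL Server 2016 SP1 and later:

-- Shows Y/N per service, indicating whether instant file initialization is in effect
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';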

sql server 2025 express server configuration

Database Engine Configuration

Following this step, we have the Database Engine Configuration window. You can specify whether you want to log in using Windows authentication or Mixed mode (both SQL Server logins and Windows logins). For security reasons, Windows authentication is the recommended option. With Mixed mode, you must also set a password for the system administrator (sa) login; enabling the sa login is likewise not recommended for security reasons.

For more information about Windows authentication, Mixed mode, and the sa login, refer to these links:

Also, you have the chance to add or remove Administrator users for the database.
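Once the instance is up, you can confirm from T-SQL which authentication mode was chosen and, if Mixed mode is enabled, create a SQL login. This is a minimal sketch; app_login and the password are placeholders:

-- 1 = Windows authentication only, 0 = Mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- Example SQL login (Mixed mode only); replace the name and password
CREATE LOGIN app_login WITH PASSWORD = 'UseAStrongPasswordHere1!';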

sql server 2025 express database engine configuration

After that, you can select the path for the following directories:

  • First, the data root directory is the root of all the files (data files, log files, backups, configuration files and folders).
  • Secondly, the system database directory is used to store the system databases, like master, model, tempdb, and msdb.
  • Thirdly, we have the user database directory. This is the path where the user data files are created by default.
  • Also, we have the database log file directory. This is where the transaction log files are stored; the log records information about transactions in SQL Server, such as insert, update, and delete operations.
  • Finally, we have the backup directory, where backups are stored by default. (You can verify these paths after installation, as shown below.)
sql server 2025 express database engine configuration
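After installation, you can verify the default paths selected here with SERVERPROPERTY. A minimal sketch; InstanceDefaultBackupPath is only available in newer versions:

SELECT SERVERPROPERTY('InstanceDefaultDataPath')   AS DefaultDataPath,
       SERVERPROPERTY('InstanceDefaultLogPath')    AS DefaultLogPath,
       SERVERPROPERTY('InstanceDefaultBackupPath') AS DefaultBackupPath;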

TempDB Setup

The TempDB page is used to configure the temporary database (tempdb). SQL Server uses tempdb to create temporary tables and to support operations that need more working space than available memory, such as large sorts.

You can configure tempdb during installation to improve database performance. For more information, see the documentation on tempdb performance.
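To review the tempdb file layout after installation, a quick sketch:

-- size is stored in 8 KB pages, so size * 8 / 1024 converts it to MB
SELECT name, type_desc, size * 8 / 1024 AS size_mb, growth
FROM tempdb.sys.database_files;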

sql server 2025 express database engine configuration

The Memory page is used to set the minimum and maximum amount of memory allocated to SQL Server. For more information, see the documentation on recommended memory settings.
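These limits can also be adjusted later with sp_configure; a minimal sketch where 1024 MB is just an example value:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024;   -- value in MB
RECONFIGURE;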

sql server 2025 express database engine configuration

We also have a tab for User Instances, which allows users without administrator permissions to run a separate instance of the Database Engine. FILESTREAM is used to store unstructured data, such as documents and images, in the file system.
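If you skip FILESTREAM during setup, T-SQL access to it can be enabled later. A minimal sketch; level 2 also allows Windows streaming access, and the service must be restarted afterwards:

EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;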

sql server 2025 express database engine configuration

After that, press Next, and the installer will run the package to install the components and options selected.

sql server 2025 express installation progress

If everything is OK, a Complete message will be displayed.

sql server 2025 express setup complete

A restart message will be displayed.

computer restart required

At this point, SQL Server should now be installed and ready for use.

SQL Express 2025 Edition Limitations

SQL Server Express Edition supports databases up to 50 GB, whereas Enterprise Edition supports databases up to 524 PB. Express Edition is also limited to the lesser of 1 socket or 4 cores.

The maximum memory for the buffer pool per instance is 1410 MB. In addition, SQL Server Express Edition does not support Always On Failover Cluster Instances or Availability Groups.
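To confirm which edition and version an instance is running, you can query SERVERPROPERTY; a minimal sketch:

SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;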


12 Dec 2025

Best SQL Query to Find Duplicates – Step by Step Guide

Finding duplicate rows is one of the most common tasks in SQL, especially when working with real-world data like customer records, invoices, transactions, or log data.

In this article, you will learn the best SQL queries to identify duplicates, how to remove them, and how to prevent them in the future.

This is a beginner-friendly and practical tutorial you can use in SQL Server, MySQL, Oracle, PostgreSQL, and even BigQuery.


🔍 Why Finding Duplicates Is Important

Duplicate rows can create multiple problems:

  • Wrong totals in reports

  • Incorrect customer counts

  • Issues in billing

  • Slow performance

  • Errors in downstream systems (Power BI, ETL, APIs)

Cleaning duplicates is one of the first tasks in data quality improvement.


1. Find Duplicates Using GROUP BY (Best & Easiest Method)

This is the most common and universal method.

Query:

SELECT column1, column2, COUNT(*) AS duplicate_count
FROM table_name
GROUP BY column1, column2
HAVING COUNT(*) > 1;

✔ When to use:

  • When you want to know which values are duplicated

  • Works in SQL Server, MySQL, Oracle, PostgreSQL, BigQuery


📌 Example: Find Duplicate Email IDs

Suppose you have a users table in which the email raj@gmail.com is repeated.
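For illustration, here is a small sample table you can create to follow along (an assumed setup, not from the original post; the id and name columns are only there to make the later examples concrete):

CREATE TABLE users (
    id    INT PRIMARY KEY,
    name  VARCHAR(50),
    email VARCHAR(100)
);

INSERT INTO users (id, name, email) VALUES
(1, 'Raj',   'raj@gmail.com'),
(2, 'Meena', 'meena@gmail.com'),
(3, 'Raj',   'raj@gmail.com');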

Query:

SELECT email, COUNT(*) AS duplicate_count
FROM users
GROUP BY email
HAVING COUNT(*) > 1;

Output:

email            duplicate_count
raj@gmail.com    2

🔥 2. Find Duplicates Using ROW_NUMBER() (Best for Deleting)

This is the most powerful method.
It uses window functions and gives a row number to each duplicate group.

Query:

WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY email ORDER BY id) AS rn
    FROM users
)
SELECT * FROM cte WHERE rn > 1;

✔ When to use:

  • When you need to delete only duplicates and keep 1 record

  • When you want to mark duplicates


🗑 3. Delete Duplicate Rows (Keep Only One)

Using the same CTE:

WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY email ORDER BY id) AS rn
    FROM users
)
DELETE FROM cte WHERE rn > 1;

This deletes all duplicate rows but keeps the first row in each group. Note that deleting through a CTE like this is SQL Server syntax; other databases need a slightly different pattern.
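If you are not on SQL Server, a more portable sketch of the same idea (assuming an id column) wraps the row numbers in a subquery; on MySQL, the extra derived table is what makes deleting from the same table possible:

DELETE FROM users
WHERE id IN (
    SELECT id
    FROM (
        SELECT id,
               ROW_NUMBER() OVER (PARTITION BY email ORDER BY id) AS rn
        FROM users
    ) t
    WHERE t.rn > 1
);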


🧠 4. Find Duplicates by Date or Conditional Columns

Example: Find duplicates only for rows created today.

SELECT name, email, created_date, COUNT(*)
FROM users
WHERE created_date = CURRENT_DATE
GROUP BY name, email, created_date
HAVING COUNT(*) > 1;

This is useful for ETL loads and daily imports.


🛑 5. Prevent Duplicates Using Constraints

Best long-term solution.

In SQL Server / MySQL / PostgreSQL:

ALTER TABLE users ADD CONSTRAINT uq_email UNIQUE (email);

In Oracle:

ALTER TABLE users ADD CONSTRAINT uq_email UNIQUE (email);

After this, the database will automatically block duplicate values.




FAQ – Frequently Asked Questions

1. What is the fastest way to find duplicates in SQL?

Using GROUP BY + HAVING COUNT(*) > 1 is the simplest and fastest method.


2. Does ROW_NUMBER() find duplicates?

Yes. ROW_NUMBER assigns sequence numbers to rows allowing you to identify duplicate rows easily.


3. Will deleting duplicates improve performance?

Yes.
Removing duplicates improves:

  • Query performance

  • Joins

  • Aggregations

  • ETL workflows


4. How to avoid duplicates permanently?

Use a UNIQUE constraint or PRIMARY KEY on important columns like email, mobile, employee code, etc.


5. Does this work in all SQL databases?

Yes. The GROUP BY and ROW_NUMBER() patterns work in all of the following; only the DELETE syntax varies slightly, as noted above:

  • SQL Server

  • MySQL

  • PostgreSQL

  • Oracle

  • BigQuery


⭐ Final Thoughts

Finding and removing duplicates is a critical part of database maintenance.
By using the queries shown above—GROUP BY, ROW_NUMBER, and unique constraints—you can quickly identify and eliminate duplicate data efficiently.

If you want more SQL tutorials, feel free to browse my other posts or ask in the comments!



7 Dec 2025

FULL PYTHON + MS SQL SCRIPT (BEST PRACTICE)

import pyodbc


# -----------------------------------------
# SQL CONNECTION (EDIT THIS PART)
# -----------------------------------------
def get_connection():
    try:
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=YOUR_SERVER_NAME;"      # e.g. DESKTOP-123\SQLEXPRESS
            "DATABASE=YOUR_DATABASE_NAME;"
            "UID=YOUR_USERNAME;"
            "PWD=YOUR_PASSWORD;",
            autocommit=False                # Manual commit = safer
        )
        return conn
    except Exception as ex:
        print("Database connection failed:", ex)
        return None


# -----------------------------------------
# INSERT RECORD
# -----------------------------------------
def insert_employee(emp_id, name, salary):
    conn = get_connection()
    if not conn:
        return
    try:
        cursor = conn.cursor()
        query = """
            INSERT INTO Employees (EmpID, Name, Salary)
            VALUES (?, ?, ?)
        """
        cursor.execute(query, (emp_id, name, salary))
        conn.commit()
        print("Record inserted successfully!")
    except Exception as ex:
        conn.rollback()
        print("Insert failed:", ex)
    finally:
        conn.close()


# -----------------------------------------
# UPDATE RECORD
# -----------------------------------------
def update_employee(emp_id, new_salary):
    conn = get_connection()
    if not conn:
        return
    try:
        cursor = conn.cursor()
        query = """
            UPDATE Employees
            SET Salary = ?
            WHERE EmpID = ?
        """
        cursor.execute(query, (new_salary, emp_id))
        conn.commit()
        print("Record updated successfully!")
    except Exception as ex:
        conn.rollback()
        print("Update failed:", ex)
    finally:
        conn.close()


# -----------------------------------------
# DELETE RECORD
# -----------------------------------------
def delete_employee(emp_id):
    conn = get_connection()
    if not conn:
        return
    try:
        cursor = conn.cursor()
        query = "DELETE FROM Employees WHERE EmpID = ?"
        cursor.execute(query, (emp_id,))
        conn.commit()
        print("Record deleted successfully!")
    except Exception as ex:
        conn.rollback()
        print("Delete failed:", ex)
    finally:
        conn.close()


# -----------------------------------------
# SELECT ALL RECORDS
# -----------------------------------------
def get_all_employees():
    conn = get_connection()
    if not conn:
        return
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT EmpID, Name, Salary FROM Employees")

        print("\n--- Employee List ---")
        for row in cursor.fetchall():
            print(row.EmpID, row.Name, row.Salary)

    except Exception as ex:
        print("Select failed:", ex)
    finally:
        conn.close()


# -----------------------------------------
# CALL STORED PROCEDURE
# -----------------------------------------
def call_stored_procedure():
    """
    Example Stored Procedure:

    CREATE PROCEDURE GetEmployees
    AS
    BEGIN
        SELECT * FROM Employees
    END
    """
    conn = get_connection()
    if not conn:
        return
    try:
        cursor = conn.cursor()
        cursor.execute("{CALL GetEmployees}")   # or EXEC GetEmployees
        rows = cursor.fetchall()

        print("\n--- Stored Procedure Result ---")
        for row in rows:
            print(row.EmpID, row.Name, row.Salary)

    except Exception as ex:
        print("Stored procedure failed:", ex)
    finally:
        conn.close()


# -----------------------------------------
# MAIN TESTING
# -----------------------------------------
if __name__ == "__main__":
    insert_employee(1, "Vijay", 50000)
    update_employee(1, 60000)
    get_all_employees()
    delete_employee(1)
    get_all_employees()
    call_stored_procedure()

🚀 How to Use

  1. Update the connection details:

     SERVER=
     DATABASE=
     UID=
     PWD=

  2. Create the sample table in SQL:

     CREATE TABLE Employees (
         EmpID  INT PRIMARY KEY,
         Name   VARCHAR(100),
         Salary DECIMAL(18,2)
     );

  3. Run the Python file:

     python mssql_python_demo.py

4 Dec 2025

Copilot in SSMS provides AI assistance similar to ChatGPT directly within SQL Server Management Studio (SSMS)

 Requirements

SQL Server Management Studio 21 or later is required, along with an Azure OpenAI endpoint and deployment from Azure AI Foundry. Azure authentication is recommended over API keys for security.


Installation Steps

Launch the Visual Studio Installer, select your SSMS installation, and choose Modify. Under the AI Assistance workload, ensure Copilot in SSMS is selected, then click Modify to install.


Configuration Steps

Open SSMS, click the Copilot button on the SQL Editor toolbar (or use View > Copilot or Ctrl+Alt+C). Enter your Azure OpenAI Endpoint, Deployment (e.g., gpt-4o), and API Key if needed, then select Launch Copilot. Authenticate via Microsoft Entra ID if prompted, and adjust settings under Tools > Options > Copilot.


Usage

Connect to your database in a query editor, open the Copilot chat window, and ask natural-language questions about your schema or T-SQL code. It supports SQL Server, Azure SQL, and related platforms, but review the outputs for accuracy. There is no direct ChatGPT integration; the Azure OpenAI setup described above is required.

SQL Server 2025 introduces vector data types and functions

SQL Server 2025 introduces vector data types and functions, enabling native AI-driven similarity searches and machine learning tasks directly in SQL queries without external tools.


Key AI Features

Vector data types store embeddings in an optimized binary format, exposed as JSON arrays, for efficient operations like distance calculations (e.g., VECTOR_DISTANCE, VECTOR_NORM). New vector indexes support approximate nearest neighbor searches, accelerating AI workloads. External AI model management allows invoking REST endpoints for embeddings via sp_invoke_external_rest_endpoint.
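As a rough sketch of what this looks like in T-SQL, assuming the VECTOR data type and VECTOR_DISTANCE function described above (the Documents table, its columns, and the 3-dimensional vectors are made-up examples):

-- Table with a small vector column holding embeddings
CREATE TABLE Documents (
    DocID     INT PRIMARY KEY,
    Title     NVARCHAR(200),
    Embedding VECTOR(3)
);

INSERT INTO Documents (DocID, Title, Embedding)
VALUES (1, N'Intro to SQL', CAST('[0.10, 0.95, 0.20]' AS VECTOR(3)));

-- Find the documents closest to a query vector using cosine distance
DECLARE @q VECTOR(3) = CAST('[0.12, 0.90, 0.25]' AS VECTOR(3));

SELECT TOP (5) DocID, Title,
       VECTOR_DISTANCE('cosine', Embedding, @q) AS distance
FROM Documents
ORDER BY distance;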


Developer Enhancements

Regular expression functions such as REGEXP_LIKE and REGEXP_REPLACE enable pattern matching and text manipulation in queries. Fuzzy string matching with EDIT_DISTANCE_SIMILARITY computes similarity scores for approximate searches. Change event streaming captures real-time DML changes to Azure Event Hubs in JSON or Avro formats.
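A hedged sketch of how these functions might be used, assuming REGEXP_LIKE(string, pattern) and EDIT_DISTANCE_SIMILARITY(string1, string2) behave as described above (the Customers table and its columns are made up):

-- Rows whose email matches a basic pattern
SELECT CustomerID, Email
FROM Customers
WHERE REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

-- Approximate name matching; a higher score means more similar
SELECT CustomerID, Name,
       EDIT_DISTANCE_SIMILARITY(Name, 'Jonathan Smith') AS similarity
FROM Customers
ORDER BY similarity DESC;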


Performance Improvements

Optimized locking reduces blocking and memory use, while tempdb governance prevents space exhaustion from runaway jobs. Cardinality estimation feedback for expressions adapts query plans across executions, and optional parameter plan optimization (OPPO) handles varying parameters dynamically. These improvements align with current trends like AI integration and query tuning for modern data stacks.

1 Dec 2025

Cloud Integration and Hybrid Data Solutions

 Cloud integration and hybrid data solutions involve combining on-premises infrastructure with cloud services to create flexible, scalable, and efficient data environments. This approach allows organizations to leverage the strengths of both local (on-premises) and cloud-based systems, enhancing performance, availability, and data management capabilities.

Here’s a deeper dive into Cloud Integration and Hybrid Data Solutions:

1. Hybrid Cloud Architecture

A hybrid cloud architecture combines private (on-premises) and public cloud environments. It allows data and applications to move between these environments, enabling:

  • Flexibility: Workloads can be managed dynamically across on-premises and cloud environments based on performance, cost, or security needs.
  • Cost Efficiency: You can keep sensitive or high-priority data on-premises while using the cloud for scalability and cost-effective storage.
  • Disaster Recovery & High Availability: Data can be replicated or backed up in the cloud, ensuring business continuity in case of an on-premises failure.

2. Data Integration Strategies

Organizations can integrate data from multiple sources (on-premises, cloud databases, applications, IoT devices, etc.) to create a unified data platform. Key methods include:

  • ETL/ELT: Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) pipelines move data between sources. ELT is more cloud-friendly as transformations occur after the data is loaded.
  • Data Replication: Continuous replication of on-premises data to the cloud ensures data synchronization, providing low-latency access across environments.
  • APIs: APIs connect various services, allowing applications in different environments to communicate seamlessly.

3. Cloud Data Services

Cloud platforms like Microsoft Azure, Amazon Web Services (AWS), and Google Cloud offer robust data solutions:

  • Azure Synapse Analytics: Combines big data and data warehousing, offering real-time analytics over large datasets, whether stored on-premises or in the cloud.
  • AWS Glue: Serverless data integration service that makes it easy to prepare data for analytics by combining data from various sources.
  • Google BigQuery: Serverless data warehouse with built-in machine learning, designed for handling vast amounts of data across hybrid environments.

4. Multi-Cloud Strategy

Some organizations adopt a multi-cloud approach, using services from more than one cloud provider to avoid vendor lock-in, optimize costs, and improve redundancy. A well-executed multi-cloud strategy offers:

  • Interoperability: Data and services work seamlessly across different cloud providers.
  • Data Portability: Simplified movement of workloads between different cloud environments.
  • Compliance & Regulation: Certain clouds may be chosen for specific workloads based on regional compliance or data residency requirements.

5. Edge Computing

Edge computing complements cloud and hybrid models by processing data closer to its source (e.g., IoT devices or local servers). This reduces latency and bandwidth costs, especially when processing time-sensitive data.

  • Hybrid Edge Architecture: Combines edge computing with cloud services, sending processed data to the cloud for storage or further analysis while keeping latency-critical operations local.
  • Use Cases: Real-time monitoring, predictive maintenance, and industrial automation.

6. Data Virtualization

Data virtualization allows for real-time access and query capabilities across diverse data sources without physically moving data. This reduces complexity in hybrid cloud scenarios, providing:

  • Unified Data View: Access and manipulate data from multiple sources (cloud, on-premises, external) without duplication.
  • Real-Time Analytics: Execute analytics directly on distributed datasets without the need for extensive ETL processes.

7. Security and Governance in Hybrid Solutions

Security remains a critical concern in hybrid solutions. Organizations need to implement:

  • Data Encryption: Data must be encrypted both in transit (between environments) and at rest.
  • Identity & Access Management (IAM): Ensure proper authentication and role-based access control (RBAC) for users across both cloud and on-premises environments.
  • Compliance: Hybrid solutions need to meet regulatory standards (e.g., GDPR, HIPAA), especially when moving sensitive data between environments.

8. Cloud-Native Technologies in Hybrid Solutions

  • Containers & Kubernetes: Kubernetes orchestrates containers across hybrid cloud environments, enabling portability and consistency in application deployment.
  • Serverless Functions: Services like AWS Lambda, Azure Functions, and Google Cloud Functions allow code to run without provisioning servers, providing scalable compute in hybrid setups.
  • Microservices Architecture: Enables the development of applications as small, independently deployable services. Microservices work well in hybrid environments, allowing specific services to run in the cloud while others remain on-premises.

9. Benefits of Hybrid Data Solutions

  • Scalability: Utilize the cloud to handle spikes in demand while maintaining core workloads on-premises.
  • Cost Control: Manage expenses by leveraging cloud resources dynamically and reducing dependency on expensive hardware.
  • Innovation & Agility: Experiment with new cloud services (like machine learning, AI, or advanced analytics) without disrupting core on-premises operations.
  • Data Sovereignty: Maintain control over sensitive data by keeping it on-premises while using the cloud for less critical data or compute-heavy tasks.

10. Real-World Use Cases

  • Healthcare: Hybrid solutions allow sensitive patient data to remain on-premises while analytics and machine learning run in the cloud.
  • Financial Services: Banks use hybrid architectures to comply with regulations that mandate data residency while leveraging cloud platforms for AI-driven risk assessment or fraud detection.
  • Manufacturing: Edge devices collect data on the factory floor, process it locally, and send summary data to the cloud for further analysis.

By combining the strengths of on-premises systems and cloud platforms, cloud integration and hybrid data solutions offer businesses a path to scalable, secure, and cost-effective data management, enabling them to handle modern workloads and adapt to evolving technology landscapes.

Intelligent Query Processing (IQP)

 Intelligent Query Processing (IQP) is a suite of features introduced in SQL Server to optimize query performance with minimal changes to existing code. It is designed to automatically improve query performance by adapting to runtime conditions, reducing the need for manual query tuning. IQP is part of the broader Intelligent Database concept and includes several enhancements aimed at improving the performance of queries across various workloads.

Here are the key features of Intelligent Query Processing:

1. Batch Mode on Rowstore (SQL Server 2019)

  • Previously, batch mode execution was only available for columnstore indexes. Now, SQL Server 2019 extends batch mode processing to rowstore tables, providing performance improvements for analytical queries on traditional rowstore tables.

2. Table Variable Deferred Compilation (SQL Server 2019)

  • In prior versions, table variables were always assumed to have a single row at compile time, leading to suboptimal query plans when they contained more data. SQL Server 2019 defers the compilation of table variables until runtime when actual row counts are known, improving query performance.

3. Approximate Query Processing (SQL Server 2019)

  • The APPROX_COUNT_DISTINCT function is introduced to provide faster approximate distinct count calculations for large datasets. This is useful when exact precision is not required but performance is critical, such as in large analytical workloads (see the example below).
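A quick sketch of the syntax (Sales.Orders and CustomerID are placeholder names):

-- Approximate number of distinct customers; cheaper than COUNT(DISTINCT ...) on very large tables
SELECT APPROX_COUNT_DISTINCT(CustomerID) AS approx_customers
FROM Sales.Orders;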

4. Scalar UDF Inlining (SQL Server 2019)

  • Scalar user-defined functions (UDFs) often caused performance bottlenecks because they were executed row-by-row. With inlining, SQL Server replaces the UDF calls with the function body, allowing the entire query to be optimized as a single unit and improving performance significantly.

5. Adaptive Joins (SQL Server 2017)

  • SQL Server can now switch dynamically between Nested Loops and Hash Joins during query execution based on the actual row count. This helps optimize join performance in cases where cardinality estimates may be inaccurate.

6. Interleaved Execution for Multi-Statement Table-Valued Functions (SQL Server 2017)

  • SQL Server can now defer execution of certain parts of the query plan involving multi-statement table-valued functions (MSTVF) until runtime when actual data sizes are known, leading to more accurate query plans and better performance.

7. Memory Grant Feedback (SQL Server 2017)

  • SQL Server adjusts the memory allocated for query execution based on feedback from prior executions. If a query requests too much or too little memory, the system learns and adjusts the memory grant for future executions, reducing memory contention and improving performance.

8. Batch Mode Memory Grant Feedback (SQL Server 2019)

  • Expanding on memory grant feedback, batch mode execution can now dynamically adjust memory grants, which is especially beneficial for large analytical queries, helping to avoid both under- and over-provisioning of memory.

9. Batch Mode Adaptive Joins (SQL Server 2019)

  • Similar to adaptive joins, this feature extends adaptive behavior to batch mode, allowing the system to switch between join strategies at runtime based on data distribution and actual row counts.

10. Cardinality Estimation Feedback (SQL Server 2022)

  • Introduced in SQL Server 2022, this feature helps improve the accuracy of query plans by updating cardinality estimates based on feedback from prior executions, optimizing for future runs.

11. Parameter Sensitivity Plan Optimization (SQL Server 2022)

  • Parameter sensitivity issues can cause performance problems due to a single query plan being used for different parameter values. This feature addresses the problem by generating and storing multiple plans for a single query based on parameter values, improving query performance and reducing bottlenecks.

12. Degree of Parallelism (DOP) Feedback (SQL Server 2022)

  • This feature optimizes the degree of parallelism used by queries, adjusting based on the actual runtime conditions. Over time, SQL Server learns and refines the parallelism strategy, improving the query's performance in future executions.

These IQP features help SQL Server adapt to changing workloads and data distribution, reducing manual tuning effort while improving performance across OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) workloads. They contribute to more efficient query execution by learning from actual data and workload behavior.

Query Store Enhancements

 Query Store in SQL Server has received several enhancements in recent versions, making it even more powerful for performance monitoring and query tuning. Here are some of the key enhancements:

1. Query Store for Read-Only Replicas (SQL Server 2019)

  • Query Store now supports capturing performance data on read-only replicas in Always On Availability Groups. This allows you to monitor and tune queries running on these replicas without impacting the primary.

2. Custom Capture Policies (SQL Server 2019)

  • You can define custom capture policies for Query Store, which allows you to control which queries are captured based on factors such as execution frequency, resource consumption, or duration. This helps reduce overhead by ignoring less important queries.

3. Automatic Plan Correction (SQL Server 2017+)

  • SQL Server can automatically detect and correct query performance regressions by reverting to an older query plan that is known to perform better. This is part of the Intelligent Query Processing (IQP) feature set.

4. Improved User Interface in SSMS

  • The Query Store UI in SQL Server Management Studio (SSMS) has improved significantly, offering better visualization for comparing query performance over time, analyzing plan changes, and more.

5. Hybrid Workloads and Query Store for Cloud Environments

  • Azure SQL Database and SQL Managed Instance also benefit from Query Store enhancements, such as the ability to monitor query performance across different environments (on-premises and cloud) and integrate with Azure monitoring tools.

6. Wait Statistics

  • Query Store now includes wait statistics, which help you understand the causes of performance bottlenecks related to waits (e.g., I/O, locks, CPU). This provides better insight into the actual root causes of performance issues.

7. Aggregated Query Data

  • Query Store now offers enhanced options for aggregating query data, making it easier to analyze query patterns and performance over longer periods of time without needing to focus on individual executions.

8. Query Store on a Per-Database Basis

  • You can now enable or disable Query Store at the database level, providing more control over where Query Store is used and reducing unnecessary overhead in less critical databases (see the example below).
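Enabling Query Store for a single database is one statement; a minimal sketch where YourDatabase is a placeholder and the options shown are optional:

ALTER DATABASE YourDatabase
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = AUTO);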

These enhancements help developers and DBAs monitor, troubleshoot, and optimize query performance more efficiently across various environments.

What is the difference between DECODE and CASE?

 DECODE and CASE are both conditional expressions used in SQL to perform conditional logic within queries. However, they differ in syntax, functionality, and availability across different database systems. Here's a breakdown of the differences:

1. Syntax

  • DECODE Syntax:



    DECODE(expression, search1, result1, search2, result2, ..., default)
    • expression: The value to be compared.
    • search: The value to compare against the expression.
    • result: The value to return if the search value matches the expression.
    • default: The value to return if none of the search values match the expression.
  • CASE Syntax:

    • Simple CASE Expression:

      CASE expression
          WHEN value1 THEN result1
          WHEN value2 THEN result2
          ...
          ELSE default
      END
    • Searched CASE Expression:

      CASE
          WHEN condition1 THEN result1
          WHEN condition2 THEN result2
          ...
          ELSE default
      END

2. Functionality

  • DECODE:

    • Acts like a simplified CASE expression.
    • Only compares a single expression against a series of possible values.
    • Stops evaluating once a match is found.
    • More compact and concise for simple equality checks.
    • Primarily available in Oracle databases.
  • CASE:

    • More powerful and flexible than DECODE.
    • Can handle multiple conditions and logical expressions, not just equality checks.
    • Available in almost all major relational database management systems (RDBMS), including Oracle, SQL Server, MySQL, and PostgreSQL.
    • Provides both a simple case (for equality checks) and a searched case (for more complex conditions).

3. Portability

  • DECODE:

    • Mostly specific to Oracle databases, so it is less portable across different RDBMS.
    • Not natively supported in SQL Server, MySQL, or PostgreSQL.
  • CASE:

    • Standard SQL and widely supported across different RDBMS.
    • More portable, making it a better choice for SQL scripts that need to work across different database systems.

4. Use Cases

  • DECODE:

    • Best for simple mappings and when working within Oracle.
    • Example:

      SELECT DECODE(status, 'A', 'Active', 'I', 'Inactive', 'Unknown') FROM users;
  • CASE:

    • Preferred for complex conditions, non-Oracle databases, and where portability is a concern.
    • Example:

      SELECT CASE
                 WHEN status = 'A' THEN 'Active'
                 WHEN status = 'I' THEN 'Inactive'
                 ELSE 'Unknown'
             END
      FROM users;

5. Performance

  • Performance is typically comparable between the two for simple use cases, but CASE can be more optimized and versatile for complex scenarios.

Conclusion:

  • Use DECODE if you are working in Oracle and need a quick, simple mapping solution.
  • Use CASE for more complex conditions, for better portability, and when working with non-Oracle databases.
