Przygody z oprogramowaniem

PLSSUG Kraków – Access Methods in SQL Server

26 April, 2013 – Blog, SQL

Join me this coming Monday for my next session on how SQL Server processes queries.

UPDATE: The session starts at 18:00.

PLSSUG
Monday, 22 April
18:00 – 19:00 | Access Methods in SQL Server | Michał Mac

Where and how does the database store its data? What are the strategies for retrieving it? Why did the server pick one access method over another? On what basis did it make that choice? Which access method is faster, and why? How do indexes work?

VENUE:
Wyższa Szkoła Zarządzania i Bankowości w Krakowie, Al. Kijowska 14, room 6K (ground floor)

More details can be found on the PLSSUG website.

Hope to see you there.


SQL Server Optimization – recording

26 March, 2013 – Blog, SQL


PLSSUG Kraków – Query Optimization in SQL Server session

8 February, 2013 – Blog, SQL

Join me this coming Wednesday for my session on how SQL Server processes queries.

PLSSUG
Wednesday, 13 February
18:00 – 19:00 | Query Optimization in SQL Server | Michał Mac

The presentation will cover various techniques that help answer the question of whether a given query is efficient. What exactly is the problem? How do you read physical query plans? How do you compare the performance of two queries? What are the most common problems? How can ROW_NUMBER and other techniques be used to improve performance? During the session we will work through and solve concrete problems.

VENUE:
Wyższa Szkoła Zarządzania i Bankowości w Krakowie, Al. Kijowska 14, room 6K (ground floor)

More details can be found on the PLSSUG website.

Hope to see you there.


SQL Editions Testing

1 June, 2009 – SQL

Remember to test your applications not only against SQL Server Developer Edition. Check how they behave under different editions and versions.

Today I was helping a team speed up a mysteriously long-running report. In the local test environment the report took about 90 seconds to complete. At the client site, however, it could not complete at all (it was killed after 30 minutes or so), even though it ran against the same database on better hardware. The report seemed very simple – several joins, several aggregates and one big non-correlated subquery (joins and EXCEPT).

It turned out to be a suboptimal plan generated by Standard Edition. The fix was to split the query: the subquery result was saved to a temp table and then used in the outer query. Running time: 30 seconds.
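
The original report query is not reproduced here, but the rewrite followed roughly this pattern – a minimal sketch with made-up table names, not the actual report:

-- 1. Materialize the big non-correlated subquery into a temp table.
SELECT p.ProductId
INTO #Excluded
FROM dbo.SoldProducts AS p
EXCEPT
SELECT r.ProductId
FROM dbo.ReturnedProducts AS r;

-- 2. Join the outer query to the small temp table instead of letting the
--    optimizer fold everything into one large plan.
SELECT s.ProductId, SUM(s.Amount) AS Total
FROM dbo.Sales AS s
JOIN #Excluded AS e ON e.ProductId = s.ProductId
GROUP BY s.ProductId;

The point is not the temp table itself but breaking one large optimization problem into two smaller, simpler ones.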


Non-clustered index maintenance

24 July, 2008 – SQL

Non-clustered indexes are physically updated only when the value of the column they index actually changes. This can be observed in several places:

  • SET STATISTICS IO ON
  • sys.dm_db_index_operational_stats
  • Execution Plan
  • Transaction Locks

Let’s look at an example:

USE tempdb;

IF OBJECT_ID('T') IS NOT NULL DROP TABLE T;
CREATE TABLE T (a INT PRIMARY KEY, b INT);
CREATE INDEX x_B ON T(b);
INSERT T VALUES (1, 1);

SELECT index_id, leaf_insert_count, leaf_update_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('T'), NULL, NULL);

Returns

index_id      leaf_insert_count      leaf_update_count
----------- -------------------- --------------------
1               1                            0
2               1                            0

There were two leaf inserts: one into the clustered index and one into the non-clustered index – the (1, 1) record.

SET STATISTICS IO ON
SET STATISTICS PROFILE ON

BEGIN TRAN;
UPDATE T SET b = 7
  • Table ‘T’. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
  • Both indexes were updated.
    index_id      leaf_insert_count      leaf_update_count
    ----------- -------------------- --------------------
    1               1                           1
    2               2                           0
    
  • In the plan there are several Compute Scalar operators. Expr1003 evaluates to 1 when column b already holds the new value – i.e. when nothing actually changes – and this flag is presumably what lets the engine skip the non-clustered index when the value does not change.
    UPDATE [T] set [b] = @1
      |--Clustered Index Update(OBJECT: ([tempdb].[dbo].[T].[PK__T__2A363CC5]), OBJECT: ([tempdb].[dbo].[T].[x_B]), SET: ([tempdb].[dbo].[T].[b] = [Expr1014]))
           |--Compute Scalar(DEFINE: ([Expr1016]=[Expr1016]))
                |--Compute Scalar(DEFINE: ([Expr1016]=CASE WHEN [Expr1003] THEN (1) ELSE (0) END))
                     |--Compute Scalar(DEFINE: ([Expr1014]=(7)))
                          |--Compute Scalar(DEFINE: ([Expr1003]=CASE WHEN [tempdb].[dbo].[T].[b] = (7) THEN (1) ELSE (0) END))
                               |--Top(ROWCOUNT est 0)
                                    |--Clustered Index Scan(OBJECT: ([tempdb].[dbo].[T].[PK__T__2A363CC5]))
    
  • Locks taken by the update – both indexes are locked (a sketch of a query that produces this list follows below):
    index_id  request_mode  request_type  request_status
    --------- ------------- ------------- --------------
    1         IX            LOCK          GRANT
    2         IX            LOCK          GRANT
    2         X             LOCK          GRANT
    2         X             LOCK          GRANT
    1         X             LOCK          GRANT
    
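The lock lists shown in this post were most likely taken from sys.dm_tran_locks. A minimal sketch of such a query – an assumption, not necessarily the author's original – run in a second window while the transaction is still open:

-- Locks held on table T in tempdb, resolved to index ids through sys.partitions
SELECT p.index_id, l.request_mode, l.request_type, l.request_status
FROM sys.dm_tran_locks AS l
JOIN tempdb.sys.partitions AS p
    ON p.hobt_id = l.resource_associated_entity_id
WHERE l.resource_database_id = DB_ID('tempdb')
  AND p.object_id = OBJECT_ID('tempdb.dbo.T');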

Now, let's update the row with the same value:

COMMIT;

BEGIN TRAN;
UPDATE T SET b = 7;

    Table ‘T’. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

  • Only the clustered index was updated
    index_id      leaf_insert_count      leaf_update_count
    ----------- -------------------- --------------------
    1               1                           2
    2               2                           0
    
  • Only locks on the clustered index were taken
    index_id  request_mode  request_type  request_status
    --------- ------------- ------------- --------------
    1         IX            LOCK          GRANT
    1         X             LOCK          GRANT
    

Quick variable init and compound assignment

25 February, 2008 – SQL

Handy new features in Katmai (SQL Server 2008). I think no description is required. 🙂

DECLARE @a INT = 2;
SELECT @a;
SET @a += 4;
SELECT @a;

Islands

3 October, 2007 – SQL

The task for today is to find the existing contiguous ranges – the islands.

Setup script

USE tempdb;

CREATE TABLE T1 ( a INT );

BEGIN TRAN
SET NOCOUNT ON;

DECLARE @i INT;
SET @i = 1;

WHILE ( @i <= 1000 )
BEGIN
    INSERT t1 VALUES ( @i );
    SET @i = @i + 1;
END;

DELETE t1 WHERE a in ( 4, 5, 7, 20, 21 );
SET NOCOUNT OFF;
COMMIT;

Output

There are 4 ranges: 1-3, 6, 8-19, 22-1000.

Solution

We want to assign a unique value to each range so that we can separate the islands. How do we generate this unique value? What's the difference between two sequences that increase at the same rate? It's a constant 🙂 so let's use ROW_NUMBER.

SELECT a, a - ROW_NUMBER() OVER ( ORDER BY a ) AS R
FROM t1;

You now know what to do next. Aggregate on the newly created value and then simply select MIN and MAX, which are the start and the end of each range respectively.

SELECT MIN(a) AS 'start', MAX(a) AS 'end'
FROM
(
    SELECT a, a - ROW_NUMBER() OVER ( ORDER BY a ) AS R
    FROM t1
) t
GROUP BY R;

Plan

The plan for this query is pretty linear, which is good – a single table scan.

SELECT MIN(a) AS 'start', MAX(a) AS 'end'  FROM  (   SELECT a, a - ROW_NUMBER() OVER ( ORDER BY a ) AS R   FROM t1  ) t  GROUP BY R
  |--Hash Match(Aggregate, HASH: ([Expr1005]), RESIDUAL: ([Expr1005] = [Expr1005]) DEFINE: ([Expr1006]=MIN([tempdb].[dbo].[T1].[a]), [Expr1007]=MAX([tempdb].[dbo].[T1].[a])))
       |--Compute Scalar(DEFINE: ([Expr1005]=CONVERT_IMPLICIT(bigint,[tempdb].[dbo].[T1].[a],0)-[Expr1004]))
            |--Sequence Project(DEFINE: ([Expr1004]=row_number))
                 |--Compute Scalar(DEFINE: ([Expr1011]=(1)))
                      |--Segment
                           |--Sort(ORDER BY: ([tempdb].[dbo].[T1].[a] ASC))
                                |--Table Scan(OBJECT: ([tempdb].[dbo].[T1]))

Ranking Functions – the basics

2 October, 2007 – SQL

This is the first post in a series about advanced query techniques that make pretty cool use of ranking functions. Today I will present the basics.

Starting from SQL Server 2005 we have four ranking functions: RANK, DENSE_RANK, NTILE and ROW_NUMBER. Let's start with an example:

SELECT LastName, ROW_NUMBER() OVER ( ORDER BY LastName ) AS Number
FROM Person.Contact 

Here we numbered last names, starting at 1, in LastName order.
Now we want to achieve the same effect, but restart the numbering for each group (partition) with a different first letter.

SELECT LastName, ROW_NUMBER() OVER ( PARTITION BY SUBSTRING( LastName, 1, 1 ) ORDER BY LastName ) AS Number
FROM Person.Contact 

Every ranking function supports a PARTITION BY clause, which divides the result set into partitions, and an ORDER BY clause, which determines the order in which the numbers are assigned to rows within each partition.

ROW_NUMBER – returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.
RANK – returns the rank of each row within the partition of a result set. If two or more rows tie for a rank, each tied row receives the same rank. For example, two rows with the same LastName are both ranked one; the next LastName is then ranked three.
DENSE_RANK – works almost like RANK; the only difference is that the ranks have no gaps and are always consecutive (see the sketch at the end of this post).
NTILE – distributes the rows into a specified number of groups and, for each row, returns the number of the group to which the row belongs. We pass the desired number of groups as a parameter.

SELECT LastName, NTILE (4) OVER ( PARTITION BY SUBSTRING( LastName, 1, 1 ) ORDER BY LastName DESC ) AS Number
FROM
(
    SELECT DISTINCT LastName
    FROM Person.Contact
) t

Each partition will be divided into four groups. If the number of rows in a partition is not divisible by the number of groups, the groups will come in two sizes that differ by one row. Larger groups come before smaller groups in the order specified by the OVER clause.
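
To make the difference between ROW_NUMBER, RANK and DENSE_RANK concrete, here is a small side-by-side sketch (my addition, run against the same Person.Contact table used above):

SELECT LastName,
       ROW_NUMBER() OVER ( ORDER BY LastName ) AS RowNumber,  -- always increases
       RANK()       OVER ( ORDER BY LastName ) AS [Rank],     -- ties share a rank, then a gap
       DENSE_RANK() OVER ( ORDER BY LastName ) AS DenseRank   -- ties share a rank, no gaps
FROM Person.Contact;

For two rows that share a LastName, ROW_NUMBER returns 1 and 2, RANK returns 1 and 1 with the next distinct LastName ranked 3, and DENSE_RANK returns 1 and 1 with the next distinct LastName ranked 2.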


date and time

11 August, 2007 – SQL

At last we have separate date and time data types. In SQL Server 2008 there are several new types worth looking at: time, date, datetime2 and datetimeoffset.

time

Represents a time of day in 24-hour format. We can specify how many fractional second digits to store; the valid range is 0 to 7 and the default is 7. Storage size depends on the scale used and is between 3 and 5 bytes. For example:

SELECT CAST(GETDATE() AS TIME(4));

date

Represents a date. The valid range was expanded and is now 0001-01-01 through 9999-12-31. Storage size is 3 bytes.

SELECT CAST(GETDATE() AS DATE);

datetime2

Represents a combination of date and time. It is the successor of datetime, with a larger date range and higher, configurable fractional seconds precision. Storage size is between 6 and 8 bytes.

SELECT CAST(GETDATE() AS DATETIME2(5));

datetimeoffset

Represents a date and time with time zone awareness. We specify the time zone offset as [+|-]hh:mm, where hh is a two-digit value from 00 to 14. Storage size is between 8 and 10 bytes. For instance:

SELECT CAST('2007-08-11 13:08:11' AS DATETIMEOFFSET(5));

returns 2007-08-11 13:08:11.00000 +00:00.

SELECT DATEDIFF( HOUR, CAST('2007-08-11 13:08:11 +01:00' AS DATETIMEOFFSET(5)),
CAST('2007-08-11 13:08:11' AS DATETIME2(5)));

returns 1, because the datetimeoffset value is normalized to UTC (12:08:11) before the two values are compared.
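
As a related aside (my addition, not part of the original post), SQL Server 2008 also introduces SWITCHOFFSET, which re-expresses the same instant at a different offset:

SELECT SWITCHOFFSET(CAST('2007-08-11 13:08:11 +01:00' AS DATETIMEOFFSET(5)), '+05:00');
-- returns 2007-08-11 17:08:11.00000 +05:00 – the same point in time, displayed at +05:00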


Estimating Rows

10 August, 2007 – SQL

Problem

Several months ago I was thinking about how to effectively support scrollbars and count the number of rows on huge lists. Although the former is impractical, the latter could be valuable. For example, we could turn on grouping in our grids when the number of rows is small enough; otherwise the user would at least know that she should narrow down her search criteria.

Solution

An actual count would be very expensive, so we should find a way to estimate that number. The best way to do this is to utilize SQL Server estimated plans: enable estimated plan generation, submit the query and take the magic number from the returned XML.

SET SHOWPLAN_XML ON;
GO

SELECT * FROM Production.Product WHERE name LIKE '%a%';

However, there is one problem: SET SHOWPLAN_XML cannot be specified inside a stored procedure and must be the only statement in a batch. We could use SQL Trace, but it saves data to a file.
Another possibility is to make use of SQLCLR – several lines of code and everything is ready:

using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Globalization;
using System.Xml;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetEstimatedRows(string sql,
        [param: SqlFacet(Precision = 19, Scale = 4)] out SqlDecimal estimatedRows)
    {
        estimatedRows = -1;

        using (SqlConnection c = new SqlConnection("Context Connection=true"))
        {
            c.Open();

            // Turn on estimated plan generation for this connection.
            SqlCommand cmd = new SqlCommand("set showplan_xml on;", c);
            cmd.ExecuteNonQuery();

            // The query is not executed now - only its showplan XML is returned.
            SqlCommand cmd2 = new SqlCommand(sql, c);
            using (XmlReader r = cmd2.ExecuteXmlReader())
            {
                while (r.Read())
                {
                    if (r.Name == "StmtSimple")
                    {
                        NumberFormatInfo provider = new NumberFormatInfo();
                        provider.NumberDecimalSeparator = ".";
                        estimatedRows = Convert.ToDecimal(r.GetAttribute("StatementEstRows"), provider);
                        break;
                    }
                }
            }
        }
    }
}

Now we're able to estimate a statement's result set size from within T-SQL:

DECLARE @sql VARCHAR(MAX);
SET @sql = 'SELECT * FROM Production.Product WHERE name LIKE ''%a%''';
DECLARE @rows DECIMAL(19,4);
EXEC GetEstimatedRows @sql, @rows OUTPUT;
SELECT @rows;

Using the same tactic we can get arbitrary information out of the estimated plan – for example, an analogous GetEstimatedPlanCost procedure could read the StatementSubTreeCost attribute instead of StatementEstRows:

DECLARE @sql VARCHAR(MAX);
SET @sql = 'SELECT * FROM Production.Product WHERE name LIKE ''%a%''';
DECLARE @cost DECIMAL(19,7);
EXEC GetEstimatedPlanCost @sql, @cost OUTPUT;
SELECT @cost;