SqlCommand.Prepare Method

Remarks

If CommandType is set to StoredProcedure, the call to Prepare should succeed, although it may result in a no-op.

Before you call Prepare, specify the data type of each parameter in the statement to be prepared. For each parameter that has a variable length data type, you must set the Size property to the maximum size needed. Prepare returns an error if these conditions are not met.
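As a minimal sketch of those requirements (the connection string, table, and column names here are hypothetical, not from the original sample): each variable-length parameter gets an explicit data type and a maximum Size before Prepare is called.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class PrepareSizeSketch
{
    static void Main()
    {
        // Hypothetical connection string and table.
        string connectionString = @"Server=.;Database=TestDb;Integrated Security=true;";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id FROM dbo.Customers WHERE Name = @name", connection))
        {
            // nvarchar is a variable-length type, so both the SqlDbType and
            // the maximum Size must be set before Prepare is called.
            var nameParam = new SqlParameter("@name", SqlDbType.NVarChar, 100);
            nameParam.Value = "Alice";
            command.Parameters.Add(nameParam);

            connection.Open();
            command.Prepare();   // errors if a variable-length parameter has no Size
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}
```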

Note

If the database context is changed by executing the Transact-SQL USE <database> statement, or by calling the ChangeDatabase method, then Prepare must be called a second time.

If you call an Execute method after calling Prepare, any parameter value that is larger than the value specified by the Size property is automatically truncated to the original specified size of the parameter, and no truncation errors are returned.

Output parameters (whether prepared or not) must have a user-specified data type. If you specify a variable length data type, you must also specify the maximum Size.
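For example, an output-parameter sketch (the stored procedure name and connection string are hypothetical): the data type is user-specified, and because nvarchar is variable-length, a maximum Size is given as well.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class OutputParameterSketch
{
    static void Main()
    {
        // Hypothetical connection string and stored procedure.
        string connectionString = @"Server=.;Database=TestDb;Integrated Security=true;";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.usp_GetCustomerName", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@id", SqlDbType.Int).Value = 42;

            // Output parameter: user-specified data type, plus a maximum
            // Size because nvarchar is a variable-length type.
            var nameOut = new SqlParameter("@name", SqlDbType.NVarChar, 200)
            {
                Direction = ParameterDirection.Output
            };
            command.Parameters.Add(nameOut);

            connection.Open();
            command.ExecuteNonQuery();
            Console.WriteLine(nameOut.Value);
        }
    }
}
```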

Prior to Visual Studio 2010, Prepare threw an exception. Beginning in Visual Studio 2010, this method does not throw an exception.

private static void SqlCommandPrepareEx(string connectionString)
{
    try
    {
        DataTable dataTable = new DataTable();
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(null, connection))
        {
            connection.Open();

            // Create and prepare an SQL statement.
            command.CommandText =
                @"SELECT *
                  FROM dbo.tbm_mem_Member_Employee
                  WHERE PreferredEmail = @email";
            SqlParameter emailParam = new SqlParameter("@email", SqlDbType.VarChar, 4000);
            emailParam.Value = "Test2259.TFL@rdc.com";
            // Alternatively: command.Parameters.Add("@email", SqlDbType.VarChar, 4000).Value = "Test2259.TFL@rdc.com";
            command.Parameters.Add(emailParam);

            // Call Prepare after setting the CommandText and Parameters.
            command.Prepare();

            using (SqlDataAdapter da = new SqlDataAdapter(command))
            {
                // Query the database and load the result into the DataTable.
                da.Fill(dataTable);
            }
        }

        Console.WriteLine(dataTable.Rows.Count);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}

[Test]
public void SqlPrepareTest()
{
    string str = @"database=UK_Connect_6;server=172.31.211.120\sql2017;UID=clu;PWD=bqC@zwLuyul@1aaH;Current Language=English;Connection Timeout=120;";
    SqlCommandPrepareEx(str);
}

Prepared Statements in SQL-Server through C#

SQL Server (at least, via SqlClient) uses named parameters. That code will indeed execute a parameterised query, but a few notes:

  • it hasn't been formally "prepared" (see .Prepare()), but you pretty much never need to anyway
  • several of those objects are IDisposable; you should have usings for them
  • DataTable (and adapter, etc) will work, but is in decline (with mapped classes being preferred, IMO)
  • seeing a DataGridView and a SqlCommand in the same method probably means your UI code is too close to the data access code; I would push the data-access stuff down a level, personally

For example:

DataTable tbl = new DataTable();
using(var conn = new SqlConnection(@"Data Source=..."))
using(var com = conn.CreateCommand())
{
    com.CommandText = "select * from tbl1 where id<@id";
    com.Parameters.AddWithValue("@id",4);
    com.CommandType = CommandType.Text;        

    SqlDataAdapter dap = new SqlDataAdapter();   
    dap.SelectCommand = com;
    conn.Open();
    dap.Fill(tbl);
    conn.Close();     
}
return tbl;

(and bind it to the DataGridView back at the UI)

Of course, if the parameter value is always 4 you could code that into the TSQL directly.

What is the sense and benefit of using SqlCommand.Prepare()?

I came across developer code where the SqlCommand.Prepare() method (see MSDN) is used extensively in advance of executing SQL queries, and I wonder: what is the benefit of this?

Sample:

command.Prepare();
command.ExecuteNonQuery();
//...
command.Parameters[0].Value = 20;
command.ExecuteNonQuery();

I have played around a little bit and traced the activity. Executing the command after calling the Prepare() method makes SQL Server execute the following statement:

declare @p1 int
set @p1=1
exec sp_prepexec @p1 output,N'@id int,@desc text',N'INSERT INTO dbo.testtable (id) VALUES (@id)',@id=20
select @p1

After that, when the parameter gets its value and SqlCommand.ExecuteNonQuery() is called, the following is executed on SQL Server:

exec sp_execute 1,@id=20

To me this looks like the statement gets compiled as soon as Prepare() is executed. I wonder what the benefit of this is. Does this mean that it is put into the plan cache and can be re-used as soon as the final query is executed with the desired parameter values?
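Client-side, the round trips traced above correspond to the usual prepare-once, execute-many pattern; a sketch (connection string hypothetical, table name taken from the trace). Prepare() itself sends nothing to the server; the first execute goes out as sp_prepexec and later executes as sp_execute.

```csharp
using System.Data;
using System.Data.SqlClient;

class PrepareExecuteManySketch
{
    static void Main()
    {
        // Hypothetical connection string; table name taken from the trace above.
        string connectionString = @"Server=.;Database=TestDb;Integrated Security=true;";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO dbo.testtable (id) VALUES (@id)", connection))
        {
            command.Parameters.Add("@id", SqlDbType.Int);
            connection.Open();
            command.Prepare();              // no server round trip yet

            for (int id = 20; id < 25; id++)
            {
                command.Parameters["@id"].Value = id;
                // The first call is traced as sp_prepexec ("Prepare SQL"),
                // the following ones as sp_execute ("Exec Prepared SQL").
                command.ExecuteNonQuery();
            }
        }
    }
}
```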

I figured out (and documented in another question) that SqlCommands executed with SqlParameters are always wrapped in sp_executesql procedure calls. This makes SQL Server able to store and reuse the plans independent of the parameter values.

Regarding this, I wonder whether the Prepare() method is kind of useless or obsolete, or whether I am missing something here.

Answer:

Preparing a SQL batch separately from executing it is a construct that is effectively** useless for SQL Server, given how execution plans are cached. Separating the steps of preparation (parsing, binding any parameters, and compiling) and execution only makes sense when there is no caching. The purpose is to save the time spent on parsing and compiling by reusing an existing plan, and this is what the internal plan cache already does. But not all RDBMSs do this level of caching (clearly, given the existence of these functions), so it is left up to the client code to request that a plan be "cached", save that cache ID, and then re-use it. This is an additional burden on the client code (and on the programmer, who must remember to do it and get it right) that is unnecessary. In fact, the MSDN page for the IDbCommand.Prepare() method states:

The server automatically caches plans for reuse as necessary; therefore, there is no need to call this method directly in your client application.

It should be noted that, as far as my testing shows (which matches what you show in the question), calling SqlCommand.Prepare() does not do a "prepare"-only operation: it calls sp_prepexec, which prepares and executes the SQL; it does not call sp_prepare, which parses and compiles only (and does not take in any parameter values, only their names and datatypes). Hence, there cannot be any "pre-compile" benefit of calling SqlCommand.Prepare() since it does an immediate execution (assuming the goal is to defer execution). HOWEVER, there is an interesting potential minor benefit of calling SqlCommand.Prepare(), even if it does call sp_prepexec instead of sp_prepare. Please see the note (**) at the bottom for details.

My testing shows that even when SqlCommand.Prepare() is called (and please note that this call does not issue any commands on SQL Server), the ExecuteNonQuery or ExecuteReader that follows is fully executed, and if it is ExecuteReader, it returns rows. Still, SQL Server Profiler does show that the first SqlCommand.Execute______() call after the SqlCommand.Prepare() call registers as a "Prepare SQL" event, and subsequent .Execute___() calls register as "Exec Prepared SQL" events.


Test Code

It has been uploaded to PasteBin at: http://pastebin.com/Yc2Tfvup. The code creates a .NET / C# Console App that is intended to run while a SQL Server Profiler trace is running (or an Extended Events session). It pauses after each step so it will be clear which statements have a particular effect.


UPDATE

Found more info, and a slight potential reason to avoid calling SqlCommand.Prepare(). One thing I noticed in my testing, and which should be noted by anyone running that test Console App: there is never an explicit call made to sp_unprepare. I did some digging and found the following post in the MSDN forums:

SqlCommand - Prepare or not to prepare?

The accepted answer contains a bunch of info, but the salient points are:

  • ** What prepare actually saves you is primarily the time it takes to transmit the query string over the wire.
  • A prepared query has no guarantee of a cached plan being saved, and is no faster than an ad-hoc query once it reaches the server.
  • RPC's (CommandType.StoredProcedure) gain nothing from preparing.
  • On the server, a prepared handle is basically an index into an in-memory map containing TSQL to execute.
  • The map is stored on a per-connection basis, no cross-connection use is possible.
  • The reset sent when the client re-uses the connection from the pool clears the map.

I also found the following post from the SQLCAT team, which appears to be related, but could simply be an issue specific to ODBC whereas SqlClient does clean up correctly upon disposing of the SqlConnection. It is hard to say since the SQLCAT post does not mention any additional tests that would help prove this as a cause, such as clearing the connection pool, etc.

Watch out those prepared SQL statements


CONCLUSION

Given that:

  • the only real benefit of calling SqlCommand.Prepare() seems to be that you don't need to submit the query text again over the network,
calls to sp_prepare and sp_prepexec store the query text as part of the connection's memory (i.e. SQL Server memory),

I would recommend against calling SqlCommand.Prepare() because the only potential benefit is saving network packets while the downside is taking up more server memory. While the amount of memory consumed is probably very small in the majority of cases, network bandwidth is rarely ever an issue since most DB servers are directly connected to the app servers over 10 Megabit minimum (more likely 100 Megabit or Gigabit these days) connections (and some are even on the same box ;-). That and memory is nearly always a more scarce resource than network bandwidth.

I suppose that, for anyone writing software that can connect to multiple databases, the convenience of a standard interface might change this cost/benefit analysis. But even in that case, I have to believe that it is still easy enough to abstract a "standard" DB interface that allows for different providers (and hence for differences between them, such as not needing to call Prepare()), given that doing so is basic object-oriented programming that you are likely already doing in your app code ;-).

Original post: https://www.cnblogs.com/chucklu/p/14388670.html