12 Things Developers Will Love About Oracle Database 12c Release 2

 

by Chris Saxon-Oracle

It's here: Oracle Database 12c Release 2 (12.2) is available on Oracle Cloud.

With it comes a whole host of new features to help you write better, faster applications. Here's my rundown of the top 12 new features to help you when developing against Oracle Database.

Easier, Better, Faster JSON

JSON support was included in Oracle Database 12.1.0.2. This helped you work with JSON documents stored in clobs or varchar2s. These features are fantastic. But storing raw JSON should be the exception, not the norm. Most of the time you should shred your JSON documents into relational tables. This leaves you with a problem though: getting the data back out in JSON format. Trying to write your own JSON generator is hard. So in 12.2 we offer a whole host of options to help you get the job done.

JSON from SQL

12.2 provides four key functions to help you write SQL that returns data in JSON format:

  • JSON_object
  • JSON_objectagg
  • JSON_array
  • JSON_arrayagg

You use the JSON_object* functions to create key-value pair documents, i.e. the output has curly braces {}. The JSON_array* functions take a list of values and return it as an array, i.e. in square brackets []. For each row in the input, the non-agg versions of these functions output a row. The agg versions combine multiple rows into a single document or array.
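For a quick feel of the difference, here's a minimal sketch against dual (the literal values are just for illustration; output spacing may differ slightly):

select json_object ( 'name' value 'Shelley', 'job' value 'Accounting Manager' ) obj,
       json_array ( 1, 2, 3 ) arr
from   dual;

OBJ                                             ARR
----------------------------------------------- --------
{"name":"Shelley","job":"Accounting Manager"}   [1,2,3]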

OK. So how do these work?  Let's look at an example.

Say you're using the classic employees and departments tables. For each department you want a JSON document that contains:

  • The department name
  • An array of its employees
  • Each element of this array should be its own document, listing the employee's name and their job title.

For example:

{
  "department": "Accounting",
  "employees": [
    {
      "name": "Shelley,Higgins",
      "job": "Accounting Manager"
    },
    {
      "name": "William,Gietz",
      "job": "Public Accountant"
    }
  ]
}

How do you create this using the new functions? Let's work from the inside out:

  • First you need a document for each employee. This has two attributes, name and job. Pass these into a JSON_object call.
  • Then you need to turn these into an array. So wrap the JSON_object in a JSON_arrayagg. Group by department to split out the employees for each one into a separate array.
  • Finally you have a document per department. So you need another JSON_object with department and employees attributes. The values for these are the department name and the results of the JSON_arrayagg call in the previous step.

Put it all together and you get:

select json_object (
         'department' value d.department_name,
         'employees' value json_arrayagg (
           json_object (
             'name' value first_name || ',' || last_name,
             'job' value job_title )))
from   hr.departments d, hr.employees e, hr.jobs j
where  d.department_id = e.department_id
and    e.job_id = j.job_id
group  by d.department_name;

And voila! You have your JSON!

JSON in PL/SQL

So now you have your JSON document. But what if you want to edit it? Say you want to change the names to uppercase. And add a title element. So the previous document becomes:

{
  "department": "Accounting",
  "employees": [
    {
      "name": "SHELLEY,HIGGINS",
      "job": "Accounting Manager",
      "title": ""
    },
    {
      "name": "WILLIAM,GIETZ",
      "job": "Public Accountant",
      "title": ""
    }
  ]
}

If you're generating the document it's easiest to add these  in the SQL! So this assumes you want to change a JSON document from an  external source. To help with this, there are new PL/SQL objects. These enable you to access, modify and add elements to a JSON document with get/put calls.

The key object types are:

  • json_element_t – a supertype for documents and arrays
  • json_object_t – for working with JSON documents (objects)
  • json_array_t – for working with JSON arrays

The first thing you need to do is create the JSON object. Do this by parsing the document:

doc := json_object_t.parse('
  {
    "department": "Accounting",
    "employees": [
      {
        "name": "Shelley,Higgins",
        "job": "Accounting Manager"
      },
      {
        "name": "William,Gietz",
        "job": "Public Accountant"
      }
    ]
  }
');

You can then access the employees array using get:

     emps := treat(doc.get('employees') as json_array_t) ;

The treat function casts the element to the appropriate type (JSON_array_t here). Once you have the array, you can loop through the employees. Put adds a new key if it's not present. Otherwise it overwrites the existing value.

for i in 0 .. emps.get_size - 1 loop
  emp := treat(emps.get(i) as json_object_t);
  emp.put('title', '');
  emp.put('name', upper(emp.get_string('name')));
end loop;

The get functions return a reference to the original object. So if you get some JSON and modify it, the original document also changes! If you don't want this, clone the element when you get it. For example:

     emps := treat(doc.get('employees') as json_array_t).clone;

So the complete PL/SQL block to transform the JSON is:

declare
  doc  json_object_t;
  emps json_array_t;
  emp  json_object_t;
begin
  doc := json_object_t.parse('{
    "department": "Accounting",
    "employees": [
      {
        "name": "Shelley,Higgins",
        "job": "Accounting Manager"
      },
      {
        "name": "William,Gietz",
        "job": "Public Accountant"
      }
    ]
  }');

  emps := treat(doc.get('employees') as json_array_t);

  for i in 0 .. emps.get_size - 1 loop
    emp := treat(emps.get(i) as json_object_t);
    emp.put('title', '');
    emp.put('name', upper(emp.get_string('name')));
  end loop;

  dbms_output.put_line(doc.to_string);
end;
/

{
  "department": "Accounting",
  "employees": [
    {
      "name": "SHELLEY,HIGGINS",
      "job": "Accounting Manager",
      "title": ""
    },
    {
      "name": "WILLIAM,GIETZ",
      "job": "Public Accountant",
      "title": ""
    }
  ]
}

Now that you can generate JSON from SQL and change it in PL/SQL, you have powerful options for working with it. And there's a raft of other improvements to JSON functionality in 12.2, including:

  • JSON_exists function
  • Support for In-Memory, Partitioning and Materialized Views
  • Search indexes
  • GeoJSON
  • JSON Data Guide

If you're desperate to work with JSON, I recommend checking these out.
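For instance, json_exists lets you filter rows on whether a document contains a matching path. A minimal sketch, assuming a hypothetical table json_docs with a doc column holding documents like the departments example above:

select d.doc
from   json_docs d
where  json_exists ( d.doc, '$.employees[*]?(@.job == "Accounting Manager")' );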

Loooooooooooooooong Names

There are two famously hard things in computer science: cache invalidation and naming things. Oracle Database handles cache invalidation for you, so as a developer you don't have to worry about that one. But when it comes to naming things, we've made it harder than it ought to be.

Why? Take the following example:

alter table customer_addresses add constraint
  customer_addresses_customer_id_fk
  foreign key ( customer_id )
  references customers ( customer_id );

Looks like a standard foreign key creation, right? But there's a problem. Run it and you'll get:

  SQL Error: ORA-00972: identifier is too long 

Aaarrghh! The constraint name is just a tiny bit too long. Staying within the 30 byte limit can be tricky. Particularly if you have naming standards you have to follow. As a result, many people have asked us to allow longer names. Starting in 12.2 we've increased this limit. The maximum is now 128 bytes. So now you can create objects like:

create table with_a_really_really_really_really_really_long_name (
  and_lots_and_lots_and_lots_and_lots_and_lots_of int,
  really_really_really_really_really_long_columns int
);

Remember: the limit is 128 bytes. Not characters. So if you’re using a multi-byte character set, you’ll find you can’t create:

create table tablééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééé (
  is_67_chars_but_130_bytes int
);

Robust Code using Constants for Data Type Lengths

Most applications have at least one piece of PL/SQL that selects from the data dictionary. For example:

begin
  select table_name
  into   tab
  from   user_tables
  where ...

Because the maximum length of a table name has been 30 bytes forever, some developers took to declaring the variable like so:

declare
  tab varchar2(30);

Because who needs more than 30 characters, right? But as we just saw, upgrade to 12.2 and the limit is now 128 bytes! So it's only a matter of time before people create tables with longer names. Eventually this code will fail with:

     ORA-06502: PL/SQL: numeric or value error: character string buffer too small

So what to do? It'd be nice if you could change the maximum length of a varchar2 dynamically. So instead of combing through your PL/SQL, changing varchar2 ( 30 ) -> varchar2 ( 128 ), you could increase the size in a single place. Fortunately, in Oracle Database 12c Release 2 you can!

The new release enables you to declare a variable length using a constant. So you could create a constants package:

create or replace package constants as
  tab_length constant pls_integer := 128;
end constants;
/

Then use this when declaring your variables:

declare
  tab varchar2( constants.tab_length );

Now if we ever increase the length of names again, you only need to make one change: the constant's value! Note these aren't fully dynamic. The PL/SQL compiler has to know the value for the variable size at compile time. This means you can't base it on the results of a query. User-defined functions are also out. So to enable the variable to hold longer strings, you need to increase the value of constants.tab_length and recompile your code.

Now you may be thinking: for something as common as object names, surely Oracle provides something stating their maximum length? The good news is: we do.

In DBMS_standard you'll find new constants. These include ora_max_name_len. As the name suggests, this states the maximum length for object names. So you can change your table name variable declarations to:

declare
  tab varchar2( ora_max_name_len );
begin

The best part is you can make your code future proof now! By using conditional compilation you can change your data dictionary based variable declarations to:

declare
  $if DBMS_DB_VERSION.VER_LE_12_1 $then
    tab varchar2( 30 );
  $else
    tab varchar2( ora_max_name_len );
  $end

Then when you come to upgrade, the tab variable will automatically have the larger limit. You may be thinking that all sounds like a lot of work. And you're right. You can also make your variables upgrade-ready now with type anchoring:

declare
  tab user_tables.table_name%type;

Whichever method you use, start preparing your code now. It may be a long time until you upgrade. But the more robust your code is, the easier it'll be for you to use the new features.

Variable declarations are one of the more obvious problems you'll meet with longer names. Let's look at a more subtle issue:

Listagg Improved on Overflow

The following query returns a comma-separated list of indexes for each table in your schema:

select table_name,
       listagg(index_name, ',') within group (order by index_name) inds
from   user_indexes
group  by table_name;

This is all very well and good. But there's a potential problem with it. Listagg() returns a varchar2. This is limited to 4,000 bytes (32,767 if you're using extended data types). So in 12.1 and 11.2, you needed 130 or more indexes on a table before you started running into issues. And if you have that many indexes on one table, you've got bigger problems than hitting this limit.

But this changes in 12.2. With longer names, you could hit this limit at just over 30 indexes on a table. While still a large number, this is plausible. Particularly in reporting databases and data warehouses. And you can be sure that someone, somewhere will start creating "self-documenting" indexes like:

create index
  reducing_the_monthly_invoice_run_
  from_four_hours_to_three_minutes_
  PROJ12345_make_everything_faster_
  csaxon_thanks_everyone_yeah_baby on ...

Create too many of these and your listagg query will throw frustrating ORA-01489 errors. Getting around this is tricky. So in 12.2 we've added an overflow clause. To use it, place "on overflow truncate" after the separator:

select table_name,
       listagg(index_name, ','
         on overflow truncate
       ) within group (order by index_name) inds
from   user_indexes
group  by table_name;

With this in place, instead of an exception your output will now look something like:

     ...lots_and_lots_and_lots,of_indexes,...(42)    

The "…" at the end indicates that the output is larger than Oracle can return. The number in brackets shows how many characters Oracle trimmed from the results. So not only can you see there is more data, you get an indication of how much there is.

The full syntax of this is:

listagg (
  things, ','
  [ on overflow (truncate|error) ]
  [ text ] [ (with|without) count ]
) within group (order by cols)

Now you can explicitly say whether you want error or truncation semantics. There's a good chance you've already written code to handle the ORA-01489 errors. So to keep the behavior of your code the same, the default remains error.

The text and count clauses control what appears at the end of the string. If you want to replace "..." with "more", "extra" or a "click for more" hyperlink, just provide your new string.

select table_name,
       listagg(index_name, ',' on overflow truncate
         'click here'
       ) within group (order by index_name) inds
from   user_indexes
group  by table_name;

You can also remove the number of trimmed characters by specifying "without count".
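For instance, a minimal sketch combining a custom overflow string with "without count", based on the same indexes query:

select table_name,
       listagg(index_name, ',' on overflow truncate '...' without count
       ) within group (order by index_name) inds
from   user_indexes
group  by table_name;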

Lightning Fast SQL with Real-Time Materialized Views

Materialized views (MVs) can give an amazing performance boost. Once you create one based on your query, Oracle can get the results directly from the MV instead of executing the statement itself. This can make SQL significantly faster. Especially when the query processes millions of rows but there are only a handful in the output.

There's just one problem.

The data in the MV has to be fresh. Otherwise Oracle won't do the rewrite. You could of course query the MV directly. But the data will still be old.

So you need to keep the materialized view up-to-date. The easiest way is to declare it as "fast refresh on commit".

But this is easier said than done. Doing this has a couple of issues:

  • Fast refresh places restrictions on your query. Complex SQL may not be fast refreshable at all.
  • Refresh on commit adds work to every transaction that changes the base tables.

So if you have complex SQL you may not be able to use query rewrite. And even if you can, on high transaction systems the refresh overhead may cripple your system. So instead of "fast refresh on commit", you make the MV "fast refresh on demand" and create a job to update it. Which runs every second. But no matter how frequently you run the job, there will always be times when the MV is stale. So query performance could switch between lightning fast and dog slow. A guaranteed way to upset your users!

So how do you overcome this? With real time materialized views. These give the best of both worlds. You can refresh your MV on demand. But still have it return up-to-date information.

To do this, create the MV with the clause:

     on query computation

For example:

create table t (x not null primary key, y not null) as
  select rownum x, mod(rownum, 10) y from dual connect by level <= 1000;

create materialized view log on t with rowid (x, y) including new values;

create materialized view mv
refresh fast on demand
enable on query computation
enable query rewrite
as
  select y, count(*) c1
  from   t
  group  by y;

With this, you can add more data to your table:

insert into t
  select 1000+rownum, 1 from dual connect by level <= 100;

commit;

And Oracle can still use the MV to rewrite. Even though the MV is stale.

select /*+ rewrite */ y, count(*)
from   t
group  by y;

It does this by:

  • Querying the stale MV
  • Then applying the inserts, updates and deletes in the MV log to it

This can lead to some scary looking execution plans! The point to remember is Oracle is reading the materialized view log, then applying those changes to the MV on the fly. So the longer you leave it between refreshes, the more change data there will be. You'll need to test to find the sweet spot between how often you refresh and how much work applying the MV change log adds to each query.

You can even get the up-to-date information when you query the MV directly.

To do so, add the fresh_mv hint:

select /*+ fresh_mv */ * from mv;

The really cool part? You can convert your existing MVs to real time with the following command:

alter materialized view mv enable on query computation; 

This makes MVs much easier to work with, opening up your query tuning options.
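The on-demand refresh itself works just as it always has. Here's a minimal sketch of refreshing the example MV from a scheduled job and checking its staleness afterwards (the mv name is from the example above; the schedule is up to you):

begin
  dbms_mview.refresh(list => 'MV', method => 'F');  -- fast refresh using the MV log
end;
/

select mview_name, staleness, last_refresh_date
from   user_mviews
where  mview_name = 'MV';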

Approximate Query Enhancements

If you do data analysis, you often need to answer questions such as:

- How many customers visited our website yesterday?

- How many different products did we sell last month?

- How many unique SQL statements did the database execute last week?

Often these questions are simply the starting point for further analysis. So you just want a quick estimate. Answering these questions normally needs a count distinct along the lines of:

  select count ( distinct customer_id ) from website_hits;

But these queries can take a long time to run. Waiting for the answer is frustrating. But it's worse if you're getting the figures for someone else. Like your boss. And they need the figures for a meeting. That starts in a minute. And your query takes at least ten minutes. Your boss can't wait that long.

In cases like this you just need a quick estimate. After all, your boss will round your figure to one or two significant digits anyway. So in 12.1.0.2 we introduced approx_count_distinct. This returns an estimate of how many different values there are in the target column. This is typically over 99% accurate and can be significantly faster than getting exact results.
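For example, a minimal sketch using the website_hits example from above (table and column names are illustrative):

select approx_count_distinct(customer_id) approx_customers
from   website_hits;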

But to take advantage of it, you need to change your code! This could be a time consuming task. Particularly because most of the time you'll want to be able to switch between exact and approximate results. So a simple find+replace is out. Instead you'll have to pass in a flag to toggle between modes. If you're a big user of distinct counts this could be a lot of work.

So in 12.2 we've introduced a new parameter, approx_for_count_distinct. Set this to true like so:

alter session set approx_for_count_distinct = true;  

Oracle implicitly converts all count distincts to the approximate version. While playing with this you may notice a couple of other new parameters:

  • approx_for_aggregation
  • approx_for_percentile

So what are these all about? Well in 12.2 we've created a new function, approx_percentile. This is the approximate version of the percentile_disc and percentile_cont functions. It's the same concept as approx_count_distinct, just applied to these functions.

The syntax for it is:

approx_percentile (
  expr [ deterministic ],
  [ ('ERROR_RATE' | 'CONFIDENCE') ]
) within group ( order by expr )

As you can see, this has a couple of extra clauses over approx_count_distinct.

Deterministic

This defines whether you get the same results each time you run it on the same data set. Non-deterministic is the default. Meaning you could get different answers each time.

Now you may be wondering, "But why would I ever want non-deterministic results?!". Well, a couple of reasons:

  • Non-deterministic results are faster.
  • You can only get deterministic results on numeric values.

So if you want the 10th percentile in a range of dates, you have to go non-deterministic. But is the time saving for non-deterministic results worth it? To find out I created a 16 million row table on an Exadata Express Cloud Service. Then compared the run time of the following exact, deterministic and non-deterministic percentiles:

select percentile_disc(0.1) within group (order by y)
from   super_massive;

select approx_percentile(0.1 deterministic) within group (order by y)
from   super_massive;

select approx_percentile(0.1) within group (order by y)
from   super_massive;

Averaging the time for three runs of each gave the following results:

[Chart: average run times for the exact, deterministic and non-deterministic percentile queries]

The figures show the average run time in hundredths of a second.

Non-deterministic results are around 5x faster than deterministic. And nearly 15x faster than exact results. So if an estimate is all you need, you can save yourself a lot of time using approx_percentile. If you want to check this for yourself, here's the script I used.
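A minimal sketch of such a test follows (the table name, column and row count here are illustrative only; scale the row count up to match your environment):

create table super_massive as
  select rownum x, mod(rownum, 10000) y
  from   dual
  connect by level <= 1000000;

set timing on

select percentile_disc(0.1) within group (order by y) from super_massive;
select approx_percentile(0.1 deterministic) within group (order by y) from super_massive;
select approx_percentile(0.1) within group (order by y) from super_massive;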

ERROR_RATE and CONFIDENCE

If you're getting estimated figures, it does beg the question, just how accurate are the results? If it's 99.9999% that's almost certainly "good enough".  But what if they're only 98% accurate? Or 95%? At some point the error  is too large for you to rely on the estimate and you'll want to switch  back to exact calculations. But to do this you need to know what the error is.

To find this, pass ERROR_RATE or CONFIDENCE as the second  parameter. Then you'll get the accuracy figures instead of the function  result. Confidence is how certain we are that the answer is correct. Error rate gives the level of inaccuracy. Perfect for finding out how good the approximation is.
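For example, a minimal sketch using the super_massive table from above (the column aliases are just for illustration; actual figures depend on your data):

select approx_percentile(0.1 deterministic, 'ERROR_RATE')
         within group (order by y) error_rate,
       approx_percentile(0.1 deterministic, 'CONFIDENCE')
         within group (order by y) confidence
from   super_massive;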

The stats geeks among you will know that median is a special case  of percentile. So there's also an approx_median function available. This  works in the same way as approx_percentile.
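For example, again using the table from above:

select approx_median(y) from super_massive;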

So how do these functions relate to the parameter approx_for_percentile? There are two percentile functions in Oracle, percentile_disc and percentile_cont. So you have options to convert either of these or both of them. And whether to do so in a deterministic manner or not. The values it takes are listed below, with an example after the list:

  • all deterministic
  • percentile_disc deterministic
  • percentile_cont deterministic
  • all
  • percentile_disc
  • percentile_cont
  • none
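For instance, a minimal sketch that converts only deterministic percentile_disc calls for the current session (the parameter can also be set at system level):

alter session set approx_for_percentile = 'percentile_disc deterministic';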

Verify Data Type Conversions

It's one of those all too common problems: validating that a "date" is indeed a date. A prime cause of this is the terrible practice of storing dates as strings. One of the biggest issues is that it enables people to store things that clearly aren't dates in "date" columns:

create table dodgy_dates (
  id             int,
  is_this_a_date varchar2(20)
);

insert into dodgy_dates
values (1, 'abc');

Along with a whole bunch of values that might be dates. If you supply the correct format mask:

insert into dodgy_dates
values (2, '20150101');

insert into dodgy_dates
values (3, '01-jan-2016');

insert into dodgy_dates
values (4, '01/01/2016');

Returning only the valid dates is tricky. If you try to convert everything using to_date(), you'll get exceptions:

select t.*
from   dodgy_dates t
where  to_date(is_this_a_date) < sysdate;

ORA-01858: a non-numeric character was found where a numeric was expected

or maybe:

ORA-01861: literal does not match format string  

or

ORA-01843: not a valid month  

You could get around this by writing your own is_date() function. Or, if you're really brave, use a regular expression.

Either way, it's a lot of unnecessary work.

So to make your life easier, we've created a new function, validate_conversion. You pass this a value and a data type. Then Oracle will tell you whether it can do the conversion. If it can, it returns one. Otherwise you get zero.

To return the rows in the table that can be real dates, place this in your where clause:

select t.*
from   dodgy_dates t
where  validate_conversion(is_this_a_date as date) = 1;

ID         IS_THIS_A_DATE
---------- --------------------
3          01-jan-2016

There's no error. But where did rows 2 and 4 go? They're possible dates too. Validate_conversion only tests one date format at a time. By default this is your NLS_date_format.

Each client can set their own format. So if you rely on this, you may get unexpected results. To avoid this, I strongly recommend you pass the format as a parameter. For example:

select t.*
from   dodgy_dates t
where  validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1;

ID         IS_THIS_A_DATE
---------- --------------------
2          20150101

So to return all the possible dates, you'll need to call this multiple times:

select t.*
from   dodgy_dates t
where  validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1 or
       validate_conversion(is_this_a_date as date, 'dd/mm/yyyy') = 1 or
       validate_conversion(is_this_a_date as date, 'dd-mon-yyyy') = 1;

ID         IS_THIS_A_DATE
---------- --------------------
2          20150101
3          01-jan-2016
4          01/01/2016

And this isn't just for dates. You can use Validate_conversion with any of the following data types:

  • binary_double
  • binary_float
  • date
  • interval day to second
  • interval year to month
  • number
  • timestamp
  • timestamp with time zone

If you want to convert strings to dates, you'll need similar logic in the select. This will test the expression against various format masks. If it matches, call to_date with the relevant mask:

case
  when validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1
  then to_date(is_this_a_date, 'yyyymmdd')
  when validate_conversion(is_this_a_date as date, 'dd/mm/yyyy') = 1
  then to_date(is_this_a_date, 'dd/mm/yyyy')
  when validate_conversion(is_this_a_date as date, 'dd-mon-yyyy') = 1
  then to_date(is_this_a_date, 'dd-mon-yyyy')
end

But this is clunky. Fortunately, 12.2 has more functionality to support data type conversions:

Handle Casting Conversion Errors

From time to time you'll want to cast a value to a different data type. This can bring problems if your values are incompatible with the desired type.

You could overcome this with the validate_conversion function we discussed above. But there is another way. Cast now has a “default on conversion error” clause.

This specifies which value Oracle returns if it can’t convert the expression to the type you wanted. For example, say you're attempting to cast a varchar2 column to a date. But it happens to include the value "not a date". You'd get a nasty error:

select cast ( 'not a date' as date )
from   dual;

ORA-01858: a non-numeric character was found where a numeric was expected

With the new clause you can tell Oracle to return a "magic date" instead of throwing an exception. For example:

select cast (
         'not a date' as date
         default date'0001-01-01' on conversion error
       ) dt
from   dual;

DT
--------------------
01-JAN-0001 00:00:00

You can then add checks to your code for this magic value. Note that the default value has to match the data type you're converting to. So if you're casting to a date, you can't return a string:

select cast (
         '01012010' as date
         default 'not a date' on conversion error
       ) dt
from   dual;

ORA-01858: a non-numeric character was found where a numeric was expected

And, as with validate_conversion, cast uses your NLS settings for the default format. If you want to override these, pass the format as a parameter:

select cast (
         '01012010' as date
         default '01010001' on conversion error,
         'ddmmyyyy'
       ) dt
from   dual;

DT
--------------------
01-JAN-2010 00:00:00

At first glance it seems limited. After all, how often do you use cast? If you're like me, the answer is "rarely". But there's more to it than that. The conversion error clause also applies to other casting functions. Such as:

  • to_date()
  • to_number()
  • to_yminterval()
  • etc.

These are functions you use all the time. So you can write data type conversions like this:

select to_date(
         'not a date' default '01010001' on conversion error,
         'ddmmyyyy'
       ) dt
from   dual;

DT
--------------------
01-JAN-0001 00:00:00

Combining this with validate_conversion makes changing expressions to a new data type much easier.
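For instance, here's a minimal sketch that reworks the earlier case expression using just the conversion error clause, trying each format in turn and returning null for anything that doesn't match (assuming a null result suits your needs):

select id,
       coalesce(
         to_date(is_this_a_date default null on conversion error, 'yyyymmdd'),
         to_date(is_this_a_date default null on conversion error, 'dd/mm/yyyy'),
         to_date(is_this_a_date default null on conversion error, 'dd-mon-yyyy')
       ) real_date
from   dodgy_dates;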

Single Statement Table Partitioning

It’s a question that frequently comes up on Ask TOM: “How do I convert a non-partitioned table to a partitioned one?”

Before 12.2 this was a convoluted process. You had to create a partitioned copy of the table and transfer the data over. You could use DBMS_redefinition to do this online. But it was a headache and easy to get wrong.

In Oracle Database 12c Release 2 it's easy. All you need is a single alter table command:

create table t ( x int, y int, z int );

alter table t modify partition by range interval (100) (
  partition p1 values less than (100)
) online;

And you’re done.

“But what about all the indexes?” You can convert them too. Just add an “update indexes” clause and state whether you want them to be local or global after the conversion.
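For example, a minimal sketch based on the table above (the index name is hypothetical):

alter table t modify partition by range interval (100) (
  partition p1 values less than (100)
) online
update indexes (
  t_y_idx local
);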

If you really want, you can give your global indexes a different partitioning scheme. While you can change from a non-partitioned table to partitioned, you can't go back again. You also can't change the partitioning scheme, e.g. go from list to range. Try to do so and you'll get:

ORA-14427: table does not support modification to a partitioned state DDL

But if you want to get really fancy, you can go direct from a normal table to one with subpartitions too!

alter table t modify partition by range interval (100)
  subpartition by hash (y) subpartitions 4 (
    partition p1 values less than (100)
) online;

And there are even more improvements to partitioning. Such as:

Automatic List Partitioning

List partitions are great when you have a column with a specific set of values you want to carve into separate partitions. Things like states, countries and currencies are all good examples.

Reference data like these change rarely. But they do change. For example, South Sudan came into being in 2011.


If you list partitioned your data by country you need to keep your partitions up-to-date. Particularly if you let customers provide their own values. Or you could end up with embarrassing errors such as:

SQL Error: ORA-14400: inserted partition key does not map to any partition

Which of course will happen at 2am. To avoid this you could create a default partition. Any new values would then go into this, so inserts no longer throw exceptions. But over time the default partition fills up with all the new values.

So you need a regular maintenance task to split values out as needed. 12.2 resolves this with Automatic List Partitioning. Every time you insert new values, Oracle will create the new partition on the fly. To use it, simply place the automatic keyword after the partition column:

create table orders (
  customer_id      integer not null,
  order_datetime   date not null,
  country_iso_code varchar2(2) not null
) partition by list (country_iso_code) automatic (
  partition pUS values ('US'),
  partition pGB values ('GB'),
  partition pDE values ('DE'),
  partition pFR values ('FR'),
  partition pIT values ('IT')
);

insert into orders values (1, sysdate, 'ZA');

select partition_name
from   user_tab_partitions
where  table_name = 'ORDERS';

PARTITION_NAME
--------------
PDE
PFR
PGB
PIT
PUS
SYS_P1386

Note the new partition will have a system-generated name. So you may want to change it to something meaningful. You can do this with:

alter table orders rename partition SYS_P1386 to pZA;

Be aware that the default partition and automatic list partitioning are mutually exclusive options:

create table orders (
  customer_id      integer not null,
  order_datetime   date not null,
  country_iso_code varchar2(2) not null
) partition by list (country_iso_code) automatic (
  partition pUS values ('US'),
  partition pGB values ('GB'),
  partition pDE values ('DE'),
  partition pFR values ('FR'),
  partition pIT values ('IT'),
  partition pDEF values (default)
);

SQL Error: ORA-14851: DEFAULT [sub]partition cannot be specified for AUTOLIST [sub]partitioned objects.

Which makes sense when you think about it. But if you want to migrate list partitions with a default to automatic, you'll need to go through a process. First split everything out of the default partition, then drop it:

create table orders (
  customer_id      integer not null,
  order_datetime   date not null,
  country_iso_code varchar2(2) not null
) partition by list (country_iso_code) (
  partition pUS values ('US'),
  partition pGB values ('GB'),
  partition pDE values ('DE'),
  partition pFR values ('FR'),
  partition pIT values ('IT'),
  partition pDEF values (default)
);

insert into orders values (1, sysdate, 'ZA');
insert into orders values (2, sysdate, 'JP');

alter table orders split partition pDEF into (
  partition pZA values ('ZA'),
  partition pJP values ('JP'),
  partition pDEF
);

alter table orders drop partition pDEF;

alter table orders set partitioning automatic;

Note this does leave a brief window when there's no default partition and automatic list partitioning isn't enabled yet. So you may want to take a short outage to do this.

Mark Old Code as "Not For Use"

Times change. New code quickly becomes legacy code. And legacy code is often superseded by better, faster code. So you deprecate the old code.

But this brings a problem:

How do you stop people using the legacy modules? People tend to stick with what they know. Even after you've repeatedly told everyone to move to the new module there's always (at least) one developer who insists on using the deprecated procedure instead of the newer, shinier option. And in complex applications it's tough to keep track of what's obsolete.

This is tough to solve. So to help you with the deprecation process, we've introduced a new pragma. To use it, place...

pragma deprecate ( deprecated_thing, 'Message to other developers' );

...in the declaration section of the item you're retiring. How does it help? Oracle has added several new PL/SQL warnings: PLW-6019 to PLW-6022. Enable these and Oracle will tell you if you're using deprecated code:

alter session set plsql_warnings = 'enable:(6019,6020,6021,6022)';

create or replace procedure your_old_code is
  pragma deprecate (
    your_old_code, 'This is deprecated. Use new_code instead!'
  );
begin
  null;
end your_old_code;
/

show err

Warning(2,3): PLW-06019: entity YOUR_OLD_CODE is deprecated

This is great. But we've all been ignoring the "AUTHID DEFINER" warning forever. If code is truly obsolete, it would be good if you could stop people using it altogether.

Fortunately you can. Here's the great thing about warnings. You can upgrade them to be errors. PLW-6020 is thrown when you write code calling a deprecated item. Set this to error and the offending code won't compile:

alter session set plsql_warnings = 'error:6020';

create or replace procedure calling_old_code is
begin
  your_old_code();
end calling_old_code;
/

show err

3/3 PLS-06020: reference to a deprecated entity: This is deprecated. Use new_code instead!

Of course, if you turn PLW-6020 into an error system wide, a lot of stuff might break! So you can selectively upgrade it on given objects:

alter procedure calling_old_code compile plsql_warnings = 'error:6020' reuse settings;

So now you have the power to force others to stop using pre-historic code.

PL/SQL Code Coverage

We've covered a lot of new functionality. When you upgrade to 12.2 you'll want to test all your code to ensure it works as expected. Which brings the question: "How much of my code did the tests actually run?" Coverage metrics will help immensely with this.

Simple line level analysis of the tests isn't good enough. To see why, consider the following code. We have a basic function that returns its argument and calls dbms_output.

The procedure calls the function twice in a single if statement:

create or replace function f (p int)
  return int as
begin
  dbms_output.put_line('Executed: ' || p);
  return p;
end;
/

create or replace procedure p is
begin
  if f(1) = 1 or f(2) = 2 then
    dbms_output.put_line('this');
  else
    dbms_output.put_line('that');
  end if;
end p;
/

Due to short-circuit evaluation, f(2) is never executed. You can see this from the output:

SQL> exec p;
Executed: 1
this

Anything working at the line level will incorrectly report this as fully covered. To overcome this, you need details of "basic block" executions. So what is a "basic block"?

It's a piece of code that either runs completely or not at all. Code always belongs to exactly one basic block. For example:

if f(1) = 1 or f(2) = 2 then
  dbms_output.put_line('this');
else
  dbms_output.put_line('that');
end if;

has four basic blocks. One for each call to f and two for the calls to dbms_output.put_line. The new code coverage functionality measures and reports on these basic blocks.

Using it is easy. First you need to create coverage tables to store the metrics:

exec dbms_plsql_code_coverage.create_coverage_tables;

Then call start_coverage before your test and stop_coverage after:

declare
  run_id pls_integer;
begin
  run_id := dbms_plsql_code_coverage.start_coverage('TEST');
  p;
  dbms_plsql_code_coverage.stop_coverage;
end;
/

You can then get metrics by querying the dbmspcc* tables that hold these details:

select owner, name, type,
       round( ( sum(covered)/count(*) * 100), 2) pct_covered
from   dbmspcc_runs r
join   dbmspcc_units u
on     r.run_id = u.run_id
join   dbmspcc_blocks b
on     r.run_id = b.run_id
and    u.object_id = b.object_id
where  r.run_comment = 'TEST'
group  by owner, name, type;

OWNER  NAME  TYPE       PCT_COVERED
-----  ----- ---------- -----------
CHRIS  P     PROCEDURE  50
CHRIS  F     FUNCTION   100

This is all well and good. But there's always some code which your tests don't cover. Maybe it's deprecated, so you don't need to test it. Or it's "just-in-case" code to cover theoretically possible but practically impossible cases. Such as the infamous "when others" exception handler. You want to exclude these sections from your reports. Fortunately you can with the coverage pragma. By marking lines as "NOT_FEASIBLE" you can filter these out of your reports:

create or replace procedure p is
begin
  if f(1) = 1 or f(2) = 2 then
    dbms_output.put_line('this');
  else
    pragma coverage ('NOT_FEASIBLE');
    dbms_output.put_line('that');
  end if;
end p;
/

Rerun the tests and you can hide the untestable parts in your report.

select owner, name, type,
       round( ( sum(covered)/count(*) * 100), 2) pct_covered
from   dbmspcc_runs r
join   dbmspcc_units u
on     r.run_id = u.run_id
join   dbmspcc_blocks b
on     r.run_id = b.run_id
and    u.object_id = b.object_id
where  r.run_comment = 'TEST'
and    b.not_feasible = 0
group  by owner, name, type;

OWNER  NAME  TYPE       PCT_COVERED
-----  ----  ---------  -----------
CHRIS  P     PROCEDURE  66.67
CHRIS  F     FUNCTION   100

If you really want, you can exclude whole sections of code by wrapping them in two coverage pragmas. The first starting NOT_FEASIBLE_START, the second NOT_FEASIBLE_END:

begin
  pragma coverage ('NOT_FEASIBLE_START');
  a_section();
  of_untestable_code();
  pragma coverage ('NOT_FEASIBLE_END');
end;
/

 

Additional Resources

New Features in Oracle Database 12c Release 2 (Full List)

Try Oracle Database 12c Release 2 (on Oracle cloud)

Oracle Database 12cR2 Developer Features Quick Reference (infographic)

Original post: https://www.cnblogs.com/DataArt/p/10013994.html