Feed aggregator

Oracle 18c – select from a flat file

Yann Neuhaus - 1 hour 18 min ago
By Franck Pachot

This post is the first one of a series of small examples on recent Oracle features. My goal is to present them to people outside of Oracle and relational databases, maybe some NoSQL players. And this is why the title is “select from a flat-file” rather than “Inline External Tables”. In my opinion, the names of Oracle Database features are invented by the architects and developers, sometimes renamed by Marketing or the CTO, and all that is very far from what the users are looking for. In order to understand “Inline External Table” you need to know all the history behind it: there were tables, then external tables, and there were queries, and inlined queries, and… But imagine a junior who just wants to query a file: he will never find this feature. He has a file; it is not a table, it is not external, and it is not inline. What is external to him is this SQL language, and what we want to show him is that this language can query his file.

I’m running this in the Oracle 20c preview in the Oracle Cloud.

In this post, my goal is to load a small fact and dimension table for the next posts about some recent features that are interesting in data warehouses. It is the occasion to show that with Oracle we can easily select from a .csv file, without the need to run SQL*Loader or create an external table.
I’m running everything from SQLcl and then I use the host command to call curl:


host curl -L http://opendata.ecdc.europa.eu/covid19/casedistribution/csv/ | dos2unix | sort -r > /tmp/covid-19.csv

This gets the latest number of COVID-19 cases per day and per country.

It looks like this:


SQL> host head  /tmp/covid-19.csv
dateRep,day,month,year,cases,deaths,countriesAndTerritories,geoId,countryterritoryCode,popData2018,continentExp
31/12/2019,31,12,2019,27,0,China,CN,CHN,1392730000,Asia
31/12/2019,31,12,2019,0,0,Vietnam,VN,VNM,95540395,Asia
31/12/2019,31,12,2019,0,0,United_States_of_America,US,USA,327167434,America
31/12/2019,31,12,2019,0,0,United_Kingdom,UK,GBR,66488991,Europe
31/12/2019,31,12,2019,0,0,United_Arab_Emirates,AE,ARE,9630959,Asia
31/12/2019,31,12,2019,0,0,Thailand,TH,THA,69428524,Asia
31/12/2019,31,12,2019,0,0,Taiwan,TW,TWN,23780452,Asia
31/12/2019,31,12,2019,0,0,Switzerland,CH,CHE,8516543,Europe
31/12/2019,31,12,2019,0,0,Sweden,SE,SWE,10183175,Europe

I sorted them on date on purpose (next posts may talk about data clustering).

I need a directory object to access the file:


SQL> create or replace directory "/tmp" as '/tmp';

Directory created.

You don’t have to use quoted identifiers if you don’t like them. I find it convenient here.
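
If you prefer regular identifiers, the same works without quotes (TMP_DIR here is a hypothetical name); the dictionary then stores the name in uppercase and you reference it unquoted:


SQL> create or replace directory TMP_DIR as '/tmp';

Directory created.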

I can directly select from the file; the EXTERNAL clause holds what we had to put in the external table definition before 18c:


SQL> select *
   from external (
    (
     dateRep                    varchar2(10)
     ,day                       number
     ,month                     number
     ,year                      number
     ,cases                     number
     ,deaths                    number
     ,countriesAndTerritories   varchar2(60)
     ,geoId                     varchar2(30)
     ,countryterritoryCode      varchar2(3)
     ,popData2018               number
     ,continentExp              varchar2(30)
    )
    default directory "/tmp"
    access parameters (
     records delimited by newline skip 1 -- skip header
     logfile 'covid-19.log'
     badfile 'covid-19.bad'
     fields terminated by "," optionally enclosed by '"'
    )
    location ('covid-19.csv')
    reject limit 0
   )
 .

SQL> /
      DATEREP    DAY    MONTH    YEAR    CASES    DEATHS                       COUNTRIESANDTERRITORIES       GEOID    COUNTRYTERRITORYCODE    POPDATA2018    CONTINENTEXP
_____________ ______ ________ _______ ________ _________ _____________________________________________ ___________ _______________________ ______________ _______________
01/01/2020         1        1    2020        0         0 Algeria                                       DZ          DZA                           42228429 Africa
01/01/2020         1        1    2020        0         0 Armenia                                       AM          ARM                            2951776 Europe
01/01/2020         1        1    2020        0         0 Australia                                     AU          AUS                           24992369 Oceania
01/01/2020         1        1    2020        0         0 Austria                                       AT          AUT                            8847037 Europe
01/01/2020         1        1    2020        0         0 Azerbaijan                                    AZ          AZE                            9942334 Europe
01/01/2020         1        1    2020        0         0 Bahrain                                       BH          BHR                            1569439 Asia
ORA-01013: user requested cancel of current operation

SQL>

I cancelled it as that’s too long to display here.

As the query is still in the buffer, I just add a CREATE TABLE in front of it:


SQL> 1
  1* select *
SQL> c/select/create table covid as select/
   create table covid as select *
  2   from external (
  3    (
  4     dateRep                    varchar2(10)
  5     ,day                       number
...

SQL> /

Table created.

SQL>

While I’m there I’ll quickly create a fact table and a dimension hierarchy:


SQL> create table continents as select rownum continent_id, continentexp continent_name from (select distinct continentexp from covid where continentexp!='Other');

Table created.

SQL> create table countries as select country_id,country_code,country_name,continent_id from (select distinct geoid country_id,countryterritorycode country_code,countriesandterritories country_name,continentexp continent_name from covid where continentexp!='Other') left join continents using(continent_name);

Table created.

SQL> create table cases as select daterep, geoid country_id,cases from covid where continentexp!='Other';

Table created.

SQL> alter table continents add primary key (continent_id);

Table altered.

SQL> alter table countries add foreign key (continent_id) references continents;

Table altered.

SQL> alter table countries add primary key (country_id);

Table altered.

SQL> alter table cases add foreign key (country_id) references countries;

Table altered.

SQL> alter table cases add primary key (country_id,daterep);

Table altered.

SQL>

This creates a CASES fact table with only one measure (covid-19 cases) and two dimensions. To keep it simple, the date dimension here is just a date column (you usually have a foreign key to a calendar dimension). The geographical dimension is a foreign key to the COUNTRIES table, which itself has a foreign key referencing the CONTINENTS table.
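
For illustration only, here is a minimal sketch of what such a calendar dimension could look like (a hypothetical CALENDAR table that I don't create for this series):


SQL> create table calendar as
  2  select to_date('31/12/2019','DD/MM/YYYY')+rownum-1 date_id
  3   ,to_char(to_date('31/12/2019','DD/MM/YYYY')+rownum-1,'YYYY') year_number
  4   ,to_char(to_date('31/12/2019','DD/MM/YYYY')+rownum-1,'MM') month_number
  5   ,to_char(to_date('31/12/2019','DD/MM/YYYY')+rownum-1,'IW') iso_week
  6  from dual connect by level<=366;

Table created.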

12c Top-N queries

In 12c we have a nice syntax for Top-N queries with the FETCH FIRST clause of the ORDER BY:


SQL> select continent_name,country_code,max(cases) from cases join countries using(country_id) join continents using(continent_id) group by continent_name,country_code order by max(cases) desc fetch first 10 rows only;

CONTINENT_NAME                 COU MAX(CASES)
------------------------------ --- ----------
America                        USA      48529
America                        BRA      33274
Europe                         RUS      17898
Asia                           CHN      15141
America                        ECU      11536
Asia                           IND       9304
Europe                         ESP       9181
America                        PER       8875
Europe                         GBR       8719
Europe                         FRA       7578

10 rows selected.

This returns the 10 countries with the highest single-day number of covid-19 cases.
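
If ties at the 10th place matter, ONLY can be replaced by WITH TIES, which may then return more than 10 rows:


SQL> select continent_name,country_code,max(cases) from cases join countries using(country_id) join continents using(continent_id) group by continent_name,country_code order by max(cases) desc fetch first 10 rows with ties;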

20c WINDOW clauses

If I want to show the date with the maximum value, I can use analytic functions and in 20c I don’t have to repeat the window several times:


SQL> select continent_name,country_code,top_date,top_cases from (
  2   select continent_name,country_code,daterep,cases
  3    ,first_value(daterep)over(w) top_date
  4    ,first_value(cases)over(w) top_cases
  5    ,row_number()over(w) r
  6    from cases join countries using(country_id) join continents using(continent_id)
  7    window w as (partition by continent_id order by cases desc)
  8   )
  9   where r=1 -- this is to get the rows with the highest value only
 10   order by top_cases desc fetch first 10 rows only;

CONTINENT_NAME                 COU TOP_DATE    TOP_CASES
------------------------------ --- ---------- ----------
America                        USA 26/04/2020      48529
Europe                         RUS 02/06/2020      17898
Asia                           CHN 13/02/2020      15141
Africa                         ZAF 30/05/2020       1837
Oceania                        AUS 23/03/2020        611

The same can be done before 20c, but you have to repeat (partition by continent_id order by cases desc) for each projection.
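
For reference, here is the same query as it has to be written before 20c, with the window specification repeated in each projection:


SQL> select continent_name,country_code,top_date,top_cases from (
  2   select continent_name,country_code,daterep,cases
  3    ,first_value(daterep)over(partition by continent_id order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_id order by cases desc) top_cases
  5    ,row_number()over(partition by continent_id order by cases desc) r
  6    from cases join countries using(country_id) join continents using(continent_id)
  7   )
  8   where r=1
  9   order by top_cases desc fetch first 10 rows only;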

In the next post I’ll show a very nice feature: keeping the normalized 3-table data model but, because storage is cheap, materializing some pre-computed joins. If you are a fan of NoSQL because “storage is cheap” and “joins are expensive”, then you will see what we can do with SQL in this area…

The article Oracle 18c – select from a flat file first appeared on Blog dbi services.

Oracle 12c – pre-built join index

Yann Neuhaus - 1 hour 18 min ago
By Franck Pachot

This post is part of a series of small examples of recent features. I’m running this in the Oracle 20c preview in the Oracle Cloud. I have created a few tables in the previous post with a mini-snowflake scheme: a fact table CASES with the covid-19 cases per country and day. And a dimension hierarchy for the country with COUNTRIES and CONTINENTS tables.

This title may look strange for people used to Oracle. I am showing the REFRESH FAST ON STATEMENT Materialized View clause here, also known as “Synchronous Refresh for Materialized Views”. This name makes sense only when you already know materialized views, complete and fast refreshes, on-commit and on-demand refreshes… But that’s not what people will look for. Indexes are also refreshed by the statements, synchronously. If indexes had been called “Synchronous Refresh for B*Trees”, do you think they would have been so popular?

A materialized view, like an index, is a redundant structure where data is stored in a different physical layout in order to be optimal for alternative queries. For example, you ingest data per date (which is the case in my covid-19 table – each day a new row with the covid-19 cases per country). But if I want to query all points for a specific country, those are scattered throughout the physical segment that is behind the table (or the partition). With an index on the country_code, I can identify one country easily, because the index is sorted on the country. I may need to go to the table to get the rows, and that is expensive, but I can avoid it by adding all the attributes to the index. With Oracle, as with many databases, we can build covering indexes for real index-only access, even if those names are not mentioned.
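
For example, a covering index on my CASES fact table (the name here is hypothetical) gives index-only access for one country's time series, without touching the table at all:


SQL> create index cases_cov on cases(country_id, daterep, cases);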

But with my snowflake schema, I’ll not have the country_code in the fact table and I have to join to a dimension. This is more expensive because the index on the country_name will get the country_id and then I have to go to an index on the fact table to get the rows for this country_id. When it comes to joins, I cannot index the result of the join (I’m skipping bitmap join indexes here because I’m talking about covering indexes). What I would like is an index with values from multiple tables.

A materialized view can achieve much more than an index. We can build the result of the join in one table. And no need for event sourcing or streaming here to keep it up to date. No need to denormalize and risk inconsistency. When NoSQL pioneers tell you that storage is cheap and redundancy is the way to scale, just keep your relational database for integrity and build materialized views on top. When they tell you that joins are expensive, just materialize them upfront. Before 12c, keeping those materialized views consistent with the source required either:

  1. materialized view logs, which are similar to event sourcing, except that ON COMMIT refresh is strongly consistent
  2. partition change tracking, which is ok for bulk changes when scaling big data

This is different from indexes which are maintained immediately: when you update the row, the index is synchronized because your session has the values and the rowid and can go directly to update the index entry.

refresh fast on statement

In 12c you have the benefit of both: index-like fast maintenance with rowid access, and the MView possibility of querying pre-built joins. Here is an example on the tables created in the previous post.


SQL> create materialized view flatview refresh fast on statement as
  2  select daterep,continent_name,country_name,cases from cases join countries using(country_id) join continents using(continent_id) where cases>0;

select daterep,continent_name,country_name,cases from cases join countries using(country_id) join continents using(continent_id) where cases>0
                                                                                                                                             *
ERROR at line 2:
ORA-12015: cannot create a fast refresh materialized view from a complex query

There are some limitations when we want fast refresh and we have a utility to help us understand what we have to change or add in our select clause.

explain_mview

I need to create the table this utility writes its messages to:


@ ?/rdbms/admin/utlxmv

SQL> set sqlformat ansiconsole
SQL> set pagesize 10000

This has created mv_capabilities_table and I can run dbms_mview.explain_mview() now.

Here is the call, with the select part of the materialized view:


SQL> exec dbms_mview.explain_mview('-
  2  select daterep,continent_name,country_name,cases from cases join countries using(country_id) join continents using(continent_id) where cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;

   ?                  CAPABILITY_NAME    RELATED_TEXT                                                                 MSGTXT
____ ________________________________ _______________ ______________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT                        inline view or subquery in FROM list not supported for this type MV
N    REFRESH_FAST_AFTER_INSERT                        inline view or subquery in FROM list not supported for this type MV
N    REFRESH_FAST_AFTER_INSERT                        view or subquery in from list
N    REFRESH_FAST_AFTER_ONETAB_DML                    see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                       see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                 PCT FAST REFRESH is not possible if query contains an inline view

SQL> rollback;

Rollback complete.

“inline view or subquery in FROM list not supported for this type MV” is actually very misleading: I use ANSI joins, they are internally translated to query blocks, and this is not supported.

No ANSI joins

I rewrite it with the old join syntax:


SQL> exec dbms_mview.explain_mview('-
  2  select daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;
   ?                  CAPABILITY_NAME       RELATED_TEXT                                                                      MSGTXT
____ ________________________________ __________________ ___________________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT        CONTINENTS         the SELECT list does not have the rowids of all the detail tables
N    REFRESH_FAST_AFTER_INSERT        DEMO.CASES         the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.COUNTRIES     the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.CONTINENTS    the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_ONETAB_DML                       see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                    PCT is not possible on any of the detail tables in the materialized view

SQL> rollback;

Rollback complete.

Now I need to add the ROWID of the table CONTINENTS in the materialized view.

ROWID for all tables

Yes, as I mentioned, the gap between indexes and materialized views is getting narrower. The REFRESH FAST ON STATEMENT requires access by rowid to update the materialized view, like when a statement updates an index.


SQL> exec dbms_mview.explain_mview('-
  2  select continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;
   ?                  CAPABILITY_NAME       RELATED_TEXT                                                                      MSGTXT
____ ________________________________ __________________ ___________________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT        COUNTRIES          the SELECT list does not have the rowids of all the detail tables
N    REFRESH_FAST_AFTER_INSERT        DEMO.CASES         the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.COUNTRIES     the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.CONTINENTS    the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_ONETAB_DML                       see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                    PCT is not possible on any of the detail tables in the materialized view
SQL> rollback;

Rollback complete.

Now, the same message asks for the ROWID of COUNTRIES. I continue the same way and finally I have added the ROWID of all the tables involved:


SQL> exec dbms_mview.explain_mview('-
  2  select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;
   ?                  CAPABILITY_NAME       RELATED_TEXT                                                                      MSGTXT
____ ________________________________ __________________ ___________________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT        DEMO.CASES         the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.COUNTRIES     the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.CONTINENTS    the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_ONETAB_DML                       see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                    PCT is not possible on any of the detail tables in the materialized view

SQL> rollback;

Rollback complete.

Ok, now another message: “the detail table does not have a materialized view log”. But that’s exactly the purpose of statement-level refresh: being able to fast refresh without creating and maintaining materialized view logs, and without full-refreshing a table or a partition.

That’s the limit of DBMS_MVIEW.EXPLAIN_MVIEW. Let’s try to create the materialized view now:


SQL> create materialized view flatview refresh fast on statement as
  2  select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0;

select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0
                                                                                                                                                                                                                                                                                    *
ERROR at line 2:
ORA-32428: on-statement materialized join view error: Shape of MV is not
supported(composite PK)

SQL>

That’s clear. I had created the fact primary key on the compound foreign keys.

Surrogate key on fact table

This is not allowed by statement-level refresh, so let’s change that:


SQL> alter table cases add (case_id number);

Table altered.

SQL> update cases set case_id=rownum;

21274 rows updated.

SQL> alter table cases drop primary key;

Table altered.

SQL> alter table cases add primary key(case_id);

Table altered.

SQL> alter table cases add unique(daterep,country_id);

Table altered.

I have added a surrogate key and defined a unique key for the composite one.

Now the creation is successful:


SQL> create materialized view flatview refresh fast on statement as
  2  select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0;

Materialized view created.

Note that I tested later and I am able to create it with the ROWID from the fact table CASES only. But that’s not a good idea: in order to propagate any change from the underlying tables, the materialized view must have their ROWIDs, like an index. I consider the possibility to create it without them a bug.

Here are the columns stored in my materialized view:


SQL> desc flatview

              Name    Null?            Type
__________________ ________ _______________
CASE_ROWID                  ROWID
COUNTRY_ROWID               ROWID
CONTINENT_ROWID             ROWID
DATEREP                     VARCHAR2(10)
CONTINENT_NAME              VARCHAR2(30)
COUNTRY_NAME                VARCHAR2(60)
CASES                       NUMBER

Storing the ROWID is not something we should recommend, as some maintenance operations may change the physical location of rows. You will need a complete refresh of the materialized view after an online table move, for example.
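
For example, after an online move of the fact table, a complete refresh puts the materialized view back in sync (a sketch, assuming the 12.2 online move syntax):


SQL> alter table cases move online;
SQL> exec dbms_mview.refresh('FLATVIEW',method=>'C');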

No-join query

I’ll show query rewrite in another blog post. For the moment, I’ll query this materialized view directly.

Here is a query similar to the one in the previous post:


SQL> select continent_name,country_name,top_date,top_cases from (
  2   select continent_name,country_name,daterep,cases
  3    ,first_value(daterep)over(partition by continent_name order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_name order by cases desc)top_cases
  5    ,row_number()over(partition by continent_name order by cases desc) r
  6    from flatview
  7   )
  8   where r=1 order by top_cases
  9  ;

   CONTINENT_NAME                COUNTRY_NAME      TOP_DATE    TOP_CASES
_________________ ___________________________ _____________ ____________
Oceania           Australia                   23/03/2020             611
Africa            South_Africa                30/05/2020            1837
Asia              China                       13/02/2020           15141
Europe            Russia                      02/06/2020           17898
America           United_States_of_America    26/04/2020           48529

I have replaced the country_id and continent_id by their names, as I didn’t put the ids in my materialized view. And I repeated the window specification everywhere, so you can run the same in versions lower than 20c.

This materialized view is a table. I can partition it by hash to scatter the data. I can cluster on another column. I can add indexes. I have the full power of a SQL database on it, without the need to join if you think that joins are slow. If you come from NoSQL you can see it like a DynamoDB global index. You can query it without joining, fetching all attributes with one call, and filtering on another key than the primary key. But here we always have strong consistency: the changes are replicated immediately, fully ACID. They will be committed or rolled back by the same transaction that did the change. And they can be replicated synchronously or asynchronously to read-only replicas.
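
As an example, nothing prevents me from adding my own index on it (a hypothetical name again):


SQL> create index flatview_country on flatview(country_name, daterep);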

DML on base tables

Let’s do some changes here, lowering the covid-19 cases of CHN to 42%:


SQL> alter session set sql_trace=true;

Session altered.

SQL> update cases set cases=cases*0.42 where country_id=(select country_id from countries where country_code='CHN');

157 rows updated.

SQL> alter session set sql_trace=false;

Session altered.

I have set sql_trace because I want to have a look at the magic behind it.

Now running my query on the materialized view:



SQL> select continent_name,country_name,top_date,top_cases from (
  2   select continent_name,country_name,daterep,cases
  3    ,first_value(daterep)over(partition by continent_name order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_name order by cases desc)top_cases
  5    ,row_number()over(partition by continent_name order by cases desc) r
  6    from flatview
  7   )
  8*  where r=1 order by top_cases;

   CONTINENT_NAME                COUNTRY_NAME      TOP_DATE    TOP_CASES
_________________ ___________________________ _____________ ____________
Oceania           Australia                   23/03/2020             611
Africa            South_Africa                30/05/2020            1837
Asia              India                       04/06/2020            9304
Europe            Russia                      02/06/2020           17898
America           United_States_of_America    26/04/2020           48529

CHN is not the top one in Asia anymore with the 42% correction.

The changes were immediately propagated to the materialized view like when indexes are updated, and we can see that in the trace:


SQL> column value new_value tracefile
SQL> select value from v$diag_info where name='Default Trace File';
                                                                     VALUE
__________________________________________________________________________
/u01/app/oracle/diag/rdbms/cdb1a_iad154/CDB1A/trace/CDB1A_ora_49139.trc


SQL> column value clear
SQL> host tkprof &tracefile trace.txt

TKPROF: Release 20.0.0.0.0 - Development on Thu Jun 4 15:43:13 2020

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

sql_trace instruments all executions with time and number of rows. tkprof aggregates those for analysis.

The trace shows two statements on my materialized view: DELETE and INSERT.

The first one is about removing the modified rows.


DELETE FROM "DEMO"."FLATVIEW"
WHERE
 "CASE_ROWID" = :1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      157      0.00       0.00          0          0          0           0
Execute    157      0.01       0.04         42        314        433         141
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      314      0.01       0.04         42        314        433         141

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 634     (recursive depth: 1)
Number of plan statistics captured: 3

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  FLATVIEW (cr=2 pr=1 pw=0 time=395 us starts=1)
         1          1          1   INDEX UNIQUE SCAN I_OS$_FLATVIEW (cr=2 pr=1 pw=0 time=341 us starts=1 cost=1 size=10 card=1)(object id 78628)

This has been done row-by-row but is optimized with an index on ROWID that has been created autonomously with my materialized view.
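
You can verify this index, which I never created myself, in the dictionary; something like this should list I_OS$_FLATVIEW on CASE_ROWID:


SQL> select index_name,column_name from user_ind_columns where table_name='FLATVIEW';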

The second one is inserting the modified rows:


INSERT INTO  "DEMO"."FLATVIEW" SELECT "CASES".ROWID "CASE_ROWID",
  "COUNTRIES".ROWID "COUNTRY_ROWID","CONTINENTS".ROWID "CONTINENT_ROWID",
  "CASES"."DATEREP" "DATEREP","CONTINENTS"."CONTINENT_NAME" "CONTINENT_NAME",
  "COUNTRIES"."COUNTRY_NAME" "COUNTRY_NAME","CASES"."CASES" "CASES" FROM
  "CONTINENTS" "CONTINENTS","COUNTRIES" "COUNTRIES", (SELECT "CASES".ROWID
  "ROWID","CASES"."DATEREP" "DATEREP","CASES"."CASES" "CASES",
  "CASES"."COUNTRY_ID" "COUNTRY_ID" FROM "DEMO"."CASES" "CASES" WHERE
  "CASES".ROWID=(:Z)) "CASES" WHERE "CASES"."COUNTRY_ID"=
  "COUNTRIES"."COUNTRY_ID" AND "COUNTRIES"."CONTINENT_ID"=
  "CONTINENTS"."CONTINENT_ID" AND "CASES"."CASES">0


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      157      0.00       0.01          0          0          0           0
Execute    157      0.01       0.02          0        755        606         141
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      314      0.02       0.03          0        755        606         141

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 634     (recursive depth: 1)
Number of plan statistics captured: 3

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  FLATVIEW (cr=8 pr=0 pw=0 time=227 us starts=1)
         1          1          1   NESTED LOOPS  (cr=5 pr=0 pw=0 time=29 us starts=1 cost=3 size=47 card=1)
         1          1          1    NESTED LOOPS  (cr=3 pr=0 pw=0 time=20 us starts=1 cost=2 size=37 card=1)
         1          1          1     TABLE ACCESS BY USER ROWID CASES (cr=1 pr=0 pw=0 time=11 us starts=1 cost=1 size=17 card=1)
         1          1          1     TABLE ACCESS BY INDEX ROWID COUNTRIES (cr=2 pr=0 pw=0 time=9 us starts=1 cost=1 size=20 card=1)
         1          1          1      INDEX UNIQUE SCAN SYS_C009401 (cr=1 pr=0 pw=0 time=4 us starts=1 cost=0 size=0 card=1)(object id 78620)
         1          1          1    TABLE ACCESS BY INDEX ROWID CONTINENTS (cr=2 pr=0 pw=0 time=5 us starts=1 cost=1 size=10 card=1)
         1          1          1     INDEX UNIQUE SCAN SYS_C009399 (cr=1 pr=0 pw=0 time=2 us starts=1 cost=0 size=0 card=1)(object id 78619)

Again, this is apparently a row-by-row insert, as the “execute count” is nearly the same as the “rows count”; 157 is the number of rows I have updated.

You may think that this is a huge overhead, but those operations have been optimized for a long time. The materialized view is refreshed and ready for optimal queries: no need to queue, stream, reorg, vacuum… And I can imagine that if this feature is used, it will be optimized with bulk operations, which would allow compression.

Truncate

This looks all good. But… what happens if I truncate the table?


SQL> truncate table cases;

Table truncated.

SQL> select continent_name,country_name,top_date,top_cases from (
  2   select continent_name,country_name,daterep,cases
  3    ,first_value(daterep)over(partition by continent_name order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_name order by cases desc)top_cases
  5    ,row_number()over(partition by continent_name order by cases desc) r
  6    from flatview
  7   )
  8*  where r=1 order by top_cases;
   CONTINENT_NAME                COUNTRY_NAME      TOP_DATE    TOP_CASES
_________________ ___________________________ _____________ ____________
Oceania           Australia                   23/03/2020             611
Africa            South_Africa                30/05/2020            1837
Asia              India                       04/06/2020            9304
Europe            Russia                      02/06/2020           17898
America           United_States_of_America    26/04/2020           48529

Nothing changed. This is dangerous. You need to refresh it yourself. This may be a bug. What will happen if you insert data back? Note that, like with triggers, direct-path inserts will be transparently run as conventional inserts.
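
A complete refresh re-synchronizes the materialized view with the now-empty base table; for example:


SQL> exec dbms_mview.refresh(list=>'FLATVIEW',method=>'C');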

Joins are not expensive

This feature is really good for pre-building the joins in a composition of tables, as a hierarchical key-value store, or a snowflake dimension fact table. You can partition, compress, order, filter, index… as with any relational table. There is no risk here with the denormalization, as it is transparently maintained when you update the underlying tables.

If you develop on a NoSQL database because you have heard that normalization was invented to reduce storage, which is not expensive anymore, that’s a myth (you can read this long thread to understand the origin of this myth). Normalization is about database integrity and the separation of logical and physical layers. And that’s what Oracle Database implements with this feature: you update the logical view, tables are normalized for integrity, and the physical layer transparently maintains additional structures like indexes and materialized views to keep queries under single-digit milliseconds. Today you still need to think about which indexes and materialized views to build. Some advisors may help. All those are the bricks for the future: an autonomous database where you define only the logical layer for your application and all those optimisations will be done in the background.

The article Oracle 12c – pre-built join index first appeared on Blog dbi services.

Different behavior of execution of procedure from server and plsql developer

Tom Kyte - 15 hours 21 min ago
Hi, The issue that I am going to explain here is bit weird and strange. Coming straight to the issue; There is a package which consists of 10 procedures. One of them calls rest 9 procedures one by one. In one of the 9 procedure, it throws exceptio...
Categories: DBA Blogs

Database Upgrade (with HSM Encryption) from 11.2.0.4 to 18.0.0 Fails

Tom Kyte - 15 hours 21 min ago
We are trying to upgrade the database from 11g to 18c. The wallet master key has been stored in HSM and the auto-login enabled in 11g Database. The wallet is opening in 11g Database without any issues. But, it is complaining that "Wallet-is-not-open"...
Categories: DBA Blogs

Returning a named constant in a SQL query

Tom Kyte - 15 hours 21 min ago
Is it possible to return a named constant in a SQL query? In other words, let's say we want to return the equivalent of: SELECT 'Business rule violation: '||to_char(bsnss_rule) FROM violation_table; But, we would like to pick up 'Business ...
Categories: DBA Blogs

Update On My Oracle Blogging Activity

Andrejus Baranovski - Wed, 2020-06-03 15:23
If you have been following me, you probably noticed I stopped actively blogging about Oracle tech. I moved to the Medium platform and am writing Machine Learning related articles at Towards Data Science. I have been doing this since late 2018. So, I didn't stop blogging, just the subject changed. If you are interested in Machine Learning, I will be happy if you follow me on Medium.

Why I stopped blogging about Oracle? There are several reasons:

1. We are building our own product Katana ML
2. Machine Learning is a complex topic and requires lots of focus
3. I decided to dedicate my time to Machine Learning and Open Source

We still keep working with Oracle technology at Red Samurai, but you probably won't see Oracle related articles from me anymore. But then who knows, never say never.

Unity and Difference

Greg Pavlik - Wed, 2020-06-03 09:43
One of the themes that traveled from Greek philosophy through until the unfolding of modernity was the neoplatonic notion of "the One". A simple unity in which all "transcendentals" - beauty, truth, goodness - both originate and in some sense coalesce. In its patristic and medieval development, these transcendentals were "en-hypostasized" or made present in persons - the idea of the Trinity, where a communion of persons exist in perfect love, perfect peace and mutual self-offering: most importantly, a perfect unity in difference. All cultures have their formative myths and this particular myth made its mark on a broad swath of humanity over the centuries - though I think in ways that usually obscured its underlying meaning (unfortunately).

Now I have always identified with this comment of Dostoevsky: "I will tell you that I am a child of this century, a child of disbelief and doubt. I am that today and will remain so until the grave": sometimes more strongly than others. But myths are not about what we believe is "real" at any point in time. The meaning of these symbols I think says something for all of us today - particularly in the United States: that the essence of humanity may be best realized in a unity in difference that can only be realized through self-offering love. In political terms we are all citizens of one country and our obligation as a society is to care for each other. This much ought to be obvious - we cannot exclude one race, one economic class, one geography, one party, from mutual care. The whole point of our systems, in fact, ought to be to realize, however imperfectly, some level of that mutual care, of mutual up-building and mutual support.

That isn't happening today. Too often we are engaged in the opposite: mutual tearing down and avoiding our responsibilities to each other. I wish there was a magic fix for this: it clearly has been a problem that has plagued our history for a long, long time. The one suggestion I can make is to find a way to reach out across boundaries with care on a day-by-day basis. It may seem like a person cannot make a difference. No individual drop of rain thinks it is responsible for the flood.

Functions in SQL with the Multitenant Containers Clause

Yann Neuhaus - Wed, 2020-06-03 08:27
By Clemens Bleile

To prepare a presentation about Multitenant Tuning I wanted to see the METHOD_OPT dbms_stats global preference of all my pluggable DBs. In this specific case I had 3 PDBs called pdb1, pdb2 and pdb3 in my CDB. For testing purposes I changed the global preference in pdb1 from its default ‘FOR ALL COLUMNS SIZE AUTO’ to ‘FOR ALL INDEXED COLUMNS SIZE AUTO’:

c##cbleile_adm@orclcdb@PDB1> exec dbms_stats.set_global_prefs('METHOD_OPT','FOR ALL INDEXED COLUMNS SIZE AUTO');
c##cbleile_adm@orclcdb@PDB1> select dbms_stats.get_prefs('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
------------------------------------
FOR ALL INDEXED COLUMNS SIZE AUTO

Afterwards I ran my SQL with the containers clause from the root container:


c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, dbms_stats.get_prefs('METHOD_OPT') method_opt from containers(dual);

    CON_ID METHOD_OPT
---------- --------------------------------
         1 FOR ALL COLUMNS SIZE AUTO
         3 FOR ALL COLUMNS SIZE AUTO
         4 FOR ALL COLUMNS SIZE AUTO
         5 FOR ALL COLUMNS SIZE AUTO

4 rows selected.

For CON_ID 3 I expected to see “FOR ALL INDEXED COLUMNS SIZE AUTO”. What is wrong here?

I actually got it to work with the following query:


c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, method_opt from containers(select dbms_stats.get_prefs('METHOD_OPT') method_opt from dual);

    CON_ID METHOD_OPT
---------- ----------------------------------
         1 FOR ALL COLUMNS SIZE AUTO
         4 FOR ALL COLUMNS SIZE AUTO
         5 FOR ALL COLUMNS SIZE AUTO
         3 FOR ALL INDEXED COLUMNS SIZE AUTO

4 rows selected.

That is interesting. First of all, I didn’t know that you can actually use SELECT statements in the containers clause (according to the syntax diagram it has to be a table or view name only), and secondly, the function dbms_stats.get_prefs in the first example has obviously been called in the root container after getting the data.

I verified that last statement with a simple test by creating a function in all containers, which just returns the container id of the current container:


create or replace function c##cbleile_adm.show_con_id return number
as
  conid number;
begin
  select to_number(sys_context('USERENV', 'CON_ID')) into conid from sys.dual;
  return conid;
end;
/

And then the test:


c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, show_con_id from containers(dual);

    CON_ID SHOW_CON_ID
---------- -----------
         1           1
         3           1
         4           1
         5           1

4 rows selected.

c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, show_con_id from containers(select show_con_id from dual);

    CON_ID SHOW_CON_ID
---------- -----------
         4           4
         1           1
         3           3
         5           5

4 rows selected.

That proved that the function in the select-list of the first statement is actually called in the root container after getting the data from the PDBs.

Summary:
– be careful when running the containers-clause in a select-statement with a function in the select-list. You may get unexpected results.
– the syntax with a select-statement in the containers clause is interesting.

REMARK: Above tests have been performed with Oracle 19.6.

The article Functions in SQL with the Multitenant Containers Clause first appeared on Blog dbi services.

As Doors Reopen, Oracle Helps HR Teams Define What’s Next

Oracle Press Releases - Wed, 2020-06-03 08:00
Blog
As Doors Reopen, Oracle Helps HR Teams Define What’s Next

By Gretchen Alarcon, group vice president, Oracle HCM—Jun 3, 2020

Oracle Workforce Health and Safety

The COVID-19 pandemic has forced HR professionals into some of the most challenging times on record. Whether it’s adapting to new workforce demands, managing dispersed teams, or maintaining positive employee experiences in a time of volatility, HR teams have taken center stage.

But the battle is not over yet. As the road to recovery continues, organizations are now having to figure out how to safely and efficiently bring their employees back into the workplace. And it’s up to HR teams to help make that happen. We at Oracle are here to help.

A few months ago, we announced free access to our Oracle Workforce Health & Safety tool for our HR customers to help keep their employees safe, and we’ve already seen it deliver tremendous value with many customers taking action. But we don’t want to stop there.

To provide additional help for the HR community, we are introducing the Oracle Employee Care Package for the New Workplace: a bundle of HCM tools and technology that will help HR teams make work safer, more supportive, smarter, and more human as we re-enter the workplace. Within the Employee Care Package, we address three of the key areas for HR teams as they navigate the return to work: safety, support, and skills.

Prioritizing Workforce Safety

As offices begin to reopen, one of the top priorities for HR teams will without a doubt be ensuring the health and safety of their workforce. That is why we’re including our Oracle Onboarding and Oracle Workforce Health & Safety tools in the Employee Care Package.

  • Oracle Workforce Health & Safety allows HR professionals to track and report health incidents within their teams (such as confirmed COVID-19 cases), minimize exposure to the larger workforce, and take appropriate action quickly.

  • And Oracle Onboarding ensures that employees who are coming back to the workplace are re-onboarded effectively, aligned with compliance, and set up for success and productivity.

Delivering Employee Support

HR teams will also need to be ready and equipped to deliver a supportive and positive experience for their teams. Employees will likely be confused, curious, or even a little nervous when returning to the workplace, and it’s up to HR teams to support them and deliver confidence. Our Oracle HR Help Desk, Oracle Digital Assistant, and Oracle Work Life tools can help.

  • With Oracle HR Help Desk, organizations can deliver an optimal employee experience by seamlessly managing COVID-19 issues, promptly resolving employee inquiries, and identifying trending issues among the workforce to resolve quickly.

  • Oracle Digital Assistant makes it easy for employees to get quick answers to commonly asked questions, whether they’re at home or at work, and through whatever channel they prefer (SMS, voice, social, etc.)

  • HR teams can use Oracle Work Life to strengthen their workplace community and empower employees to grow stronger together. This team camaraderie will improve employee productivity and well-being.

Zeroing in on Skills

As we enter into this “new normal,” we will see a heightened importance around employee learning and reskilling as new skills gaps emerge and more roles are ready to be filled.

  • With Oracle Learning, HR teams can help employees reskill and readjust to the new workplace. Recommended trainings will help employees improve productivity, acquire new skills, and discover new opportunities.

Now more than ever, HR teams are in the spotlight and driving their organizations to define what’s next. With the right tools in place, HR teams can help their workforce readjust to this changing world. And we’re here to help every step of the way.

For more information about Oracle’s Employee Care Package, click here: http://oracle.com/employee-care-package

Fetch First Update

Jonathan Lewis - Wed, 2020-06-03 07:48

A question about mixing the (relatively new) “fetch first” syntax with “select for update” appeared a few days ago on the Oracle Developer Forum. The requirement was for a query something like:


select
        *
from
        t1
order by
        n1
fetch
        first 10 rows only
for     update
;

The problem with this query is that it results in Oracle raising error ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc. The error doesn’t seem to be particularly relevant, of course, until you remember that “fetch first” creates an inline view using the analytic row_number() under the covers.

One suggested solution was to use PL/SQL to open a cursor with a pure select, then use a loop to lock each row in turn. This would need a little defensive programming, of course, since each individual “select for update” would be running at a different SCN from the driving loop, and there would be some risk of concurrency problems (locking, or competing data change) occurring.
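
Here's a minimal sketch of that approach, with only the most basic defensive handling: skip rows that have disappeared (real code would also have to decide what to do when the n1 value has changed between the two reads):


declare
        cursor c1 is
                select rowid rid from t1 order by n1 fetch first 10 rows only;
        m_row t1%rowtype;
begin
        for r in c1 loop
                begin
                        -- each row is locked individually, at a later SCN than the cursor
                        select * into m_row from t1 where rowid = r.rid for update;
                exception
                        when no_data_found then
                                null;   -- row deleted since the cursor was opened
                end;
        end loop;
end;
/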

There is a pure – though contorted – SQL solution, where we take the driving SQL and put it into a subquery that generates the rowids of the rows we want to lock, as follows:


select
        /*+
                qb_name(main)
        */
        *
from
        t1
where
        t1.rowid in (
                select
                        /*+ qb_name(inline) unnest no_merge */
                        t1a.rowid
                from
                        t1 t1a
                order by
                        t1a.n1
                fetch 
                        first 10 rows only
        )
for update
;

The execution plan for this query is critical – so once you can get it working it would be a good idea to create a baseline (or SQL Patch) and attach it to the query. It is most important that the execution plan should be the equivalent of the following:


select  /*+   qb_name(main)  */  * from  t1 where  t1.rowid in (
select    /*+ qb_name(inline) unnest no_merge */    t1a.rowid   from
t1 t1a   order by    t1a.n1   fetch    first 10 rows only  ) for update

Plan hash value: 1286935441

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |     10 |00:00:00.01 |     190 |       |       |          |
|   1 |  FOR UPDATE                   |      |      1 |        |     10 |00:00:00.01 |     190 |       |       |          |
|   2 |   BUFFER SORT                 |      |      2 |        |     20 |00:00:00.01 |     178 |  2048 |  2048 | 2048  (0)|
|   3 |    NESTED LOOPS               |      |      1 |     10 |     10 |00:00:00.01 |     178 |       |       |          |
|*  4 |     VIEW                      |      |      1 |     10 |     10 |00:00:00.01 |     177 |       |       |          |
|*  5 |      WINDOW SORT PUSHED RANK  |      |      1 |  10000 |     10 |00:00:00.01 |     177 |  2048 |  2048 | 2048  (0)|
|   6 |       TABLE ACCESS FULL       | T1   |      1 |  10000 |  10000 |00:00:00.01 |     177 |       |       |          |
|   7 |     TABLE ACCESS BY USER ROWID| T1   |     10 |      1 |     10 |00:00:00.01 |       1 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("from$_subquery$_003"."rowlimit_$$_rownumber"<=10)
   5 - filter(ROW_NUMBER() OVER ( ORDER BY "T1A"."N1")<=10)

Critically you need the VIEW operation to be the driving query of a nested loop join that does the “table access by user rowid” joinback. In my case the query has used a full tablescan to identify the small number of rowids needed – in a production system that would be the part of the statement that should first be optimised.

It’s an unfortunate feature of this query structure (made messier by the internal rewrite for the analytic function) that it’s not easy to generate a correct set of hints to force the plan until after you’ve already managed to get the plan. Here’s the outline information that shows the messiness of the hints I would have needed:


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"INLINE")
      OUTLINE_LEAF(@"SEL$A3F38ADC")
      UNNEST(@"SEL$1")
      OUTLINE(@"INLINE")
      OUTLINE(@"MAIN")
      OUTLINE(@"SEL$1")
      NO_ACCESS(@"SEL$A3F38ADC" "from$_subquery$_003"@"SEL$1")
      ROWID(@"SEL$A3F38ADC" "T1"@"MAIN")
      LEADING(@"SEL$A3F38ADC" "from$_subquery$_003"@"SEL$1" "T1"@"MAIN")
      USE_NL(@"SEL$A3F38ADC" "T1"@"MAIN")
      FULL(@"INLINE" "T1A"@"INLINE")
      END_OUTLINE_DATA
  */

You’ll notice that my /*+ unnest */ hint is now modified – for inclusion at the start of the query – to /*+ unnest(@sel$1) */ rather than the /*+ unnest(@inline) */ that you might have expected. That’s the side effect of the optimizer doing the “fetch first” rewrite before applying “missing” query block names. If I wanted to write a full hint set into the query itself (leaving the qb_name() hints in place but removing the unnest and no_merge I had originally) I would need the following:


/*+
        unnest(@sel$1)
        leading(@sel$a3f38adc from$_subquery$_003@sel$1 t1@main)
        use_nl( @sel$a3f38adc t1@main)
        rowid(  @sel$a3f38adc t1@main)
*/

I did make a bit of a fuss about the execution plan. I think it’s probably very important that everyone who runs this query gets exactly the same plan, and the plan should be this nested loop. Although there’s a BUFFER SORT at operation 2 that is probably ensuring that everyone would get the same data in the same order, regardless of the execution plan, before locking any of it, I would be a little worried that different plans might somehow be allowed to lock the data in a different order, thus allowing for deadlocks.
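
And since I mentioned a SQL Patch: assuming the 12.2 dbms_sqldiag.create_sql_patch() function (and a purely hypothetical sql_id and patch name), a sketch of attaching the hints would look like this:


declare
        v_patch varchar2(128);
begin
        v_patch := dbms_sqldiag.create_sql_patch(
                sql_id    => 'bxgu7d0waxpmd',           -- hypothetical sql_id of the statement
                hint_text => 'unnest(@sel$1) leading(@sel$a3f38adc from$_subquery$_003@sel$1 t1@main) use_nl(@sel$a3f38adc t1@main) rowid(@sel$a3f38adc t1@main)',
                name      => 'fetch_first_for_update'
        );
end;
/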

Block Corruption

Tom Kyte - Wed, 2020-06-03 07:46
Oracle 8i has some new packages for detecting and solving issues related to block corruption. My question is how does block corruption happen when operating system most of the time would have bad cluster remapping?
Categories: DBA Blogs

unique index with null values

Tom Kyte - Wed, 2020-06-03 07:46
Hello Tom, I have this situation: With a table like create table test (id number not null, name varchar2(10) not null, source_id number); (actually the real tables have more columns, but for this question these are enough) and with th...
Categories: DBA Blogs

SQL*Loader - How can I put header info on each record

Tom Kyte - Wed, 2020-06-03 07:46
Hi, I have a question regarding SQL*Loader. I am parsing a datafile that has a header record that contains information that I need to store on each record being inserted into the database. My data file looks something like: BOF - 09/01/2003 ...
Categories: DBA Blogs

Introduction to Azure SQL Database Auto-failover groups

Yann Neuhaus - Wed, 2020-06-03 01:30

Azure SQL Database by default offers a 99.99% availability SLA across all its service tiers. This means that for any database, the downtime should not exceed 52 minutes per year. Using Zone redundancy increases availability to 99.995%, which is about 26 minutes of downtime per year.

These impressive numbers can be achieved through in-region redundancy of the compute and storage resources and automatic failover within the region.

Some disruptive events, like a datacenter outage possibly caused by a natural disaster, may impact a region's availability.
To protect against a region-wide disaster, Auto-failover groups can be configured to automatically fail over one or multiple databases to another region.

The technology behind Auto-failover groups is the same as geo-replication, but there are some differences.

In this blog post, we will configure an Auto-failover group for the databases created in a previous post.

So I start with the simple configuration of multiple Azure SQL Databases on a single server.

(Screenshot: the Azure SQL databases listed in SSMS)

Create a new server in another region

First, we need to create our backup server in another Azure region; I will choose Japan East.

New-AzSqlServer -ResourceGroupName 'SQLFailover-lab-rg' `
    -ServerName 'snasqlsrv-lab-02' `
    -Location 'Japan East' `
    -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "LabSqlAdmin", $(ConvertTo-SecureString -String "MyPassword" -AsPlainText -Force))

I now have my second server ready, with no databases except master.

Get-AzResource | Where ResourceType -like '*Sql*' | ft

Name                    ResourceGroupName  ResourceType                    Location
----                    -----------------  ------------                    --------
snasqlsrv-lab-01        SQLFailover-lab-rg Microsoft.Sql/servers           francecentral
snasqlsrv-lab-01/DB01   SQLFailover-lab-rg Microsoft.Sql/servers/databases francecentral
snasqlsrv-lab-01/DB02   SQLFailover-lab-rg Microsoft.Sql/servers/databases francecentral
snasqlsrv-lab-01/master SQLFailover-lab-rg Microsoft.Sql/servers/databases francecentral
snasqlsrv-lab-02        SQLFailover-lab-rg Microsoft.Sql/servers           japaneast
snasqlsrv-lab-02/master SQLFailover-lab-rg Microsoft.Sql/servers/databases japaneast

Create the Auto-Failover group

New-AzSqlDatabaseFailoverGroup -ResourceGroupName 'SQLFailover-lab-rg' `
    -ServerName 'snasqlsrv-lab-01' -PartnerServerName 'snasqlsrv-lab-02' `
    -FailoverGroupName 'sqlfailover-lab-fg'

FailoverGroupName                    : sqlfailover-lab-fg
Location                             : France Central
ResourceGroupName                    : SQLFailover-lab-rg
ServerName                           : snasqlsrv-lab-01
PartnerLocation                      : Japan East
PartnerResourceGroupName             : SQLFailover-lab-rg
PartnerServerName                    : snasqlsrv-lab-02
ReplicationRole                      : Primary
ReplicationState                     : CATCH_UP
ReadWriteFailoverPolicy              : Automatic
FailoverWithDataLossGracePeriodHours : 1
DatabaseNames                        : {}

I now have an Auto-failover group between my two regions, with an Automatic failover policy and a one-hour grace period.

Add databases to Auto-Failover group

There are no databases in the group yet. Let’s add them.

$database = Get-AzSqlDatabase -ResourceGroupName 'SQLFailover-lab-rg' -ServerName 'snasqlsrv-lab-01' -DatabaseName 'DB01'
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName 'SQLFailover-lab-rg' -ServerName 'snasqlsrv-lab-01' `
    -FailoverGroupName 'sqlfailover-lab-fg' -Database $database

The script above adds a single database to the Failover group. The script below will add all databases on the primary server.

$server01 = Get-AzSqlServer -ResourceGroupName 'SQLFailover-lab-rg' -ServerName 'snasqlsrv-lab-01'
$server01 | Add-AzSqlDatabaseToFailoverGroup -FailoverGroupName 'sqlfailover-lab-fg' -Database ($server01 | Get-AzSqlDatabase)

I can now see my two databases in my Failover group;

Get-AzSqlDatabaseFailoverGroup -FailoverGroupName 'sqlfailover-lab-fg' -ResourceGroupName 'SQLFailover-lab-rg' -ServerName 'snasqlsrv-lab-01'


FailoverGroupName                    : sqlfailover-lab-fg
Location                             : France Central
ResourceGroupName                    : SQLFailover-lab-rg
ServerName                           : snasqlsrv-lab-01
PartnerLocation                      : Japan East
PartnerResourceGroupName             : SQLFailover-lab-rg
PartnerServerName                    : snasqlsrv-lab-02
ReplicationRole                      : Primary
ReplicationState                     : CATCH_UP
ReadWriteFailoverPolicy              : Automatic
FailoverWithDataLossGracePeriodHours : 1
DatabaseNames                        : {DB01, DB02}

An overview of the group with a map is now available on Azure Portal.


Listeners

The Auto-failover group provides two listeners: the first for read-write OLTP-type workloads, the second for read-only connections.
The DNS records are updated automatically to redirect connections to the correct server after a role change.

  • Read-write: <failovergroup-name>.database.windows.net
  • Read-only: <failovergroup-name>.secondary.database.windows.net

Connecting to my read-only listener, I can indeed read data but not modify it;

Msg 3906, Level 16, State 2, Line 6
Failed to update database "DB01" because the database is read-only.
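
For example, a quick T-SQL check of the database updateability through each listener, using the standard DATABASEPROPERTYEX function:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;
-- READ_ONLY through the .secondary listener, READ_WRITE through the primary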

Check the status of the Auto-Failover group

As we have already seen, we can check the status in PowerShell with Get-AzSqlDatabaseFailoverGroup:

(Get-AzSqlDatabaseFailoverGroup `
   -FailoverGroupName 'sqlfailover-lab-fg' `
   -ResourceGroupName 'SQLFailover-lab-rg' `
   -ServerName 'snasqlsrv-lab-01').ReplicationRole

Some information about geo-replication is available in SQL DMVs. Here I use sys.geo_replication_links in the master database:

select d.name, @@SERVERNAME AS ServerName
	, replication_state_desc, role_desc, partner_server 
from sys.geo_replication_links AS grl
	join sys.databases AS d
		on grl.database_id = d.database_id
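
Replication lag can also be monitored from the user database itself with sys.dm_geo_replication_link_status; a quick sketch (column names from memory, worth double-checking against the documentation):

select partner_server, partner_database,
	replication_state_desc, role_desc, replication_lag_sec, last_replication
from sys.dm_geo_replication_link_status;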

Manual Failover

A failover can be done manually using the Switch-AzSqlDatabaseFailoverGroup command.
Before doing it, let’s simply check from SQL which server is currently the primary when I connect to my read-write listener;
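
For example, a quick check with @@SERVERNAME through the read-write listener (the column alias is mine):

SELECT @@SERVERNAME AS CurrentPrimaryServer;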

My primary server is server 01. Let’s fail over.

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName 'SQLFailover-lab-rg' `
   -ServerName 'snasqlsrv-lab-02' `
   -FailoverGroupName 'sqlfailover-lab-fg'

After a few seconds, my read-write listener now redirects my connection to server 02.

 

In this blog post, we have seen how to configure an Auto-failover group for Azure SQL Database.
This is definitely something that would have been more complicated to do on-premises.

 

The article Introduction to Azure SQL Database Auto-failover groups first appeared on Blog dbi services.

AWS RDS: 5 Must-Know Actions for Oracle DBAs

Pythian Group - Wed, 2020-06-03 00:00

Managing Oracle on AWS has some twists. Here are five daily DBA activities that have changed on AWS:

Kill Sessions:

begin
  rdsadmin.rdsadmin_util.kill(
    sid    => &sid,
    serial => &serial,
    method => 'IMMEDIATE');
end;
/
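
The sid and serial# values to pass in can be picked up from v$session as usual; for example (the username filter is a placeholder):

select sid, serial#, username, status
from v$session
where username = 'APP_USER';  -- placeholder username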

 

Flush shared_pool or buffer_cache:

exec rdsadmin.rdsadmin_util.flush_shared_pool;
exec rdsadmin.rdsadmin_util.flush_buffer_cache;

 

Perform RMAN Operations:

BEGIN
 rdsadmin.rdsadmin_rman_util.validate_database(
 p_validation_type => 'PHYSICAL+LOGICAL',
 p_parallel => 4,
 p_section_size_mb => 10,
 p_rman_to_dbms_output => FALSE);
END;
/
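
With p_rman_to_dbms_output left at FALSE, the RMAN output ends up in a task log file on the instance; if you just want to watch it in your session instead, the same call can be run with the flag flipped to TRUE and serveroutput enabled:

set serveroutput on

BEGIN
 rdsadmin.rdsadmin_rman_util.validate_database(
 p_validation_type => 'PHYSICAL+LOGICAL',
 p_parallel => 4,
 p_section_size_mb => 10,
 p_rman_to_dbms_output => TRUE);
END;
/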

 

Grant Privileges to SYS Objects:

# Grant

begin
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name  => 'V_$SESSION',
        p_grantee   => 'PYTHIAN',
        p_privilege => 'SELECT');
end;
/

# Grant with Grant Option

begin
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name     => 'V_$SESSION',
        p_grantee      => 'PYTHIAN',
        p_privilege    => 'SELECT',
        p_grant_option => true);
end;
/

# Revoke

begin
    rdsadmin.rdsadmin_util.revoke_sys_object(
        p_obj_name  => 'V_$SESSION',
        p_revokee   => 'PYTHIAN',
        p_privilege => 'SELECT');
end;
/

 

Create Custom Functions to Verify Passwords:

begin
    rdsadmin.rdsadmin_password_verify.create_verify_function(
        p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION', 
        p_min_length           => 12, 
        p_min_uppercase        => 2, 
        p_min_digits           => 1, 
        p_min_special          => 1,
        p_disallow_at_sign     => true);
end;
/
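
Once created, the function only takes effect when attached to a profile; the standard Oracle syntax applies here (assuming the RDS master user has the ALTER PROFILE privilege):

alter profile DEFAULT limit PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_FUNCTION;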

If you want to double-check the generated code, here’s a simple trick: check DBA_SOURCE:

col text format a150
select TEXT  from DBA_SOURCE 
where OWNER = 'SYS' and NAME = 'CUSTOM_PASSWORD_FUNCTION' order by LINE;

I hope this helps!

Categories: DBA Blogs

GoldenGate – Supplemental Logging Is A Mess

Michael Dinh - Tue, 2020-06-02 22:22

I was tasked with finding supplemental logging details for an Oracle database used with GoldenGate.

Note: this is not a pluggable database.

With ADD TRANDATA, use dba_log_groups and dba_log_group_columns.
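
For example, a quick look at the log groups ADD TRANDATA created for a schema (SCOTT matches the sample schema used below):

select owner, table_name, log_group_name, log_group_type
from dba_log_groups
where owner = 'SCOTT'
order by table_name;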

With ADD SCHEMATRANDATA, use select * from table(logmnr$always_suplog_columns( SCHEMA, TABLE ));

Basically, one would need to run the query with the logmnr pipeline function for all the tables in the schema.

Here is one process I used.

Create info_schematrandata.prm

$ cat info_schematrandata.prm
dblogin USERID ggs, PASSWORD *
info schematrandata *

Run ggsci using info_schematrandata.prm (full path is required)

$ ggsci paramfile /home/oracle/working/dinh/info_schematrandata.prm > info_schematrandata.log

Here is an example of the results (the actual output contains 12 schemas)

$ grep -i enable info_schematrandata.log
2020-06-01 05:19:35  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "SCOTT".
2020-06-01 05:19:35  INFO    OGG-01981  Schema level supplemental logging is enabled on schema "SCOTT" for all columns.

After finding the schemas, use the logmnr pipeline function to find all the details.

select * from table(logmnr$always_suplog_columns('SCOTT','EMP')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','BONUS')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','DEPT')) order by intcol;

You can find a demo of logmnr$always_suplog_columns at: GoldenGate 12c Features Found in 11.2.1.0.21 ???

References:

OGG: How To Log All Columns With Add Schematrandata To Get NOCOMPRESSUPDATES (Doc ID 1413142.1)

ADD SCHEMATRANDATA does not allow columns to be specified.
This enables logging of Primary Key columns only.
By default, updates are compressed.
In order to log all columns ADD TRANDATA would have to be used.
The ADD TRANDATA can be used in conjunction with ADD SCHEMATRANDATA to specify the non-primary key columns.

How to Check Supplemental Logging When ADD SCHEMATRANDATA is Enabled (Doc ID 1537837.1)

It is not listed in dba_log_groups or dba_log_group_columns.
select * from table(logmnr$always_suplog_columns( SCHEMA, TABLE ));

Effects of ADD TRANDATA and ADD SCHEMATRANDATA on an Oracle databases’ Supplemental Logging (Doc ID 2070331.1)

Some useful commands from ggsci:

INFO TRANDATA [container.]owner.table (info trandata *) did not work
INFO SCHEMATRANDATA schema            (info schematrandata *)
LIST TABLES table                     (list tables SCOTT.*)

Note to self:

$ cat list_table.prm
dblogin USERID ggs, PASSWORD *
list tables SCOTT.*

$ ggsci paramfile /home/oracle/working/dinh/list_table.prm > list_table.log

$ grep '\.' list_table.log | egrep -iv 'found|ggsci'| grep -A 10000 "Successfully logged into database."|grep -v database > table.log

$ cat table.log
SCOTT.EMP
SCOTT.BONUS
SCOTT.DEPT

$ cat read.sh
#!/bin/bash
# Split each OWNER.TABLE line from table.log on "." (via IFS)
# and emit one logmnr$always_suplog_columns query per table.
IFS="."
while read f1 f3
do
echo "select * from table(logmnr\$always_suplog_columns('$f1','$f3')) order by intcol;"
done < /home/oracle/working/dinh/table.log
exit

$ ./read.sh > /tmp/suplog.sql

$ head /tmp/suplog.sql
select * from table(logmnr$always_suplog_columns('SCOTT','EMP')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','BONUS')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','DEPT')) order by intcol;

$ cat suplog.sql
set numw 8 lines 200 timing off echo off pages 10000 trimsp on tab off
column NAME_COL_PLUS_SHOW_PARAM format a30
column VALUE_COL_PLUS_SHOW_PARAM format a65 wrap
col owner for a20
col table_name for a20
col column_name for a30
col log_group_type for a20
col column_list for a80
col log_group_name for a30
col table_name for a30
spool Database_Supplemental_Logging_Details.log
pro ******** Database ********
SELECT
name,db_unique_name,open_mode,database_role,remote_archive,switchover_status,dataguard_broker,primary_db_unique_name
FROM v$database
;
pro ******** Database Supplemental Logging ********
SELECT
supplemental_log_data_min MIN,
supplemental_log_data_pk PK,
supplemental_log_data_ui UI,
supplemental_log_data_fk FK,
supplemental_log_data_all "ALL"
FROM v$database
;
pro ******** Table Supplemental Logging ********
pro
pro ******** GoldenGate: ADD TRANDATA ********
SELECT
g.owner, g.table_name, g.log_group_name, g.log_group_type,
DECODE(always,'ALWAYS','Unconditional',NULL,'Conditional') always,
LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.POSITION) column_list
FROM dba_log_groups g, dba_log_group_columns c
WHERE g.owner = c.owner(+)
AND g.log_group_name = c.log_group_name(+)
AND g.table_name = c.table_name(+)
GROUP BY g.owner, g.log_group_name, g.table_name, g.log_group_type, DECODE(always,'ALWAYS','Unconditional',NULL,'Conditional')
ORDER BY g.owner, g.log_group_name, g.table_name, g.log_group_type
;
pro ******** Schema Supplemental Logging ********
pro
pro ******** GoldenGate: ADD SCHEMATRANDATA ********
@/tmp/suplog.sql
exit

Introduction to Apache Kafka

Gerger Consulting - Tue, 2020-06-02 13:03
Apache Kafka is a product that should be in every IT professional's toolbox.

Attend the webinar by developer advocate Ricardo Ferreira and learn how to use Kafka in your daily work.


About the Webinar:

The use of distributed streaming platforms is becoming increasingly popular among developers, but have you ever wondered why?
Part pub/sub messaging system, part distributed storage, part CEP-type event-processing engine, this type of technology brings a whole new perspective on how developers capture, store, and process events. This talk will explain what distributed streaming platforms are and how they can be a game changer for modern data architectures. We'll discuss the road in IT that led to the need for this type of platform, the current state of Apache Kafka, as well as scenarios where this technology can be implemented.


About the Presenter:
Ricardo is a Developer Advocate at Confluent, the company founded by the original co-creators of Apache Kafka. He has over 22 years of experience in software engineering and specializes in service-oriented architecture, big data, cloud, and serverless architecture. Prior to Confluent, he worked for other vendors such as Oracle, Red Hat, and IONA Technologies, as well as several consulting firms. While not working, he enjoys grilling steaks in his backyard with his family and friends, where he gets the chance to talk about anything that is not IT-related. Currently, he lives in Raleigh, North Carolina, with his wife and son.
Categories: Development

New SWOT Report: Running Mission-Critical Workloads In Multi-Cloud Environments Is Oracle’s Super Power

Oracle Press Releases - Tue, 2020-06-02 07:00
Blog
New SWOT Report: Running Mission-Critical Workloads In Multi-Cloud Environments Is Oracle’s Super Power

By Sasha Banks-Louie, Oracle—Jun 2, 2020

Multi-cloud environments are becoming the de facto cloud strategy among a majority of US businesses that have moved their applications to the cloud, but managing these complex infrastructures is creating new challenges that many companies are struggling to surmount—if they decide to move to the cloud at all.

These are among the key conclusions from a new report by Omdia Consulting, which analyzes the strengths, weaknesses, opportunities, and threats of running workloads in Oracle Cloud Infrastructure.  

Key findings from the Omdia SWOT report include:

  • More than 52% of businesses report that the inability to move workloads between clouds is slowing their adoption of cloud computing
  • The alliance between Oracle Cloud Infrastructure and Microsoft Azure aims to speed up cloud adoption by offering businesses a direct interconnection between the two clouds, integrated identity management, and a collaborative support agreement
  • Oracle’s open, enterprise-grade cloud architecture not only provides businesses with near-zero downtime and no cost to onboard and offboard users, it also offers the most comprehensive set of security standards and customer support services among competing cloud vendors
 

While most cloud infrastructure vendors offer companies an environment on which to run their mission-critical applications without having to manage a data center, invest in hardware, or install and update software, those vendors’ service, pricing, and support plans can vary widely.

In its recently published SWOT Assessment of Oracle Cloud Infrastructure, Omdia Consulting offers new insight into why companies should consider running their mission-critical workloads in the Oracle Cloud.   

Because Oracle Cloud Infrastructure has built a reputation for reliability, companies are guaranteed more than 99.99% uptime and fewer than four minutes per month of maintenance, the report says.

Such high availability is particularly important because banks that can’t process high-speed financial transactions, or retailers that aren’t able to synchronize their ecommerce websites with their on-hand inventories and point-of-sale data, can lose revenue, frustrate customers, and damage their brands.

Mastering Multi-Cloud Environments

As an increasing number of businesses today live in a multi-cloud world, it’s important for cloud vendors to integrate their offerings with those of their competitors.

The Oracle and Microsoft alliance announced in June 2019 enables joint customers to deploy mission-critical enterprise workloads that span both Microsoft Azure and Oracle Cloud Infrastructure environments.

Such customers can run Azure analytics and AI, for example with Oracle Autonomous Database on the same workload. This not only makes it easier for companies to have a backup cloud to aid in disaster recovery, but also to split up workloads so that data architects and application developers can choose their preferred environments and tools.

The Oracle and Microsoft alliance also removes the burden of managing multiple service orders, networking configurations, and data transfers from different clouds across workloads.

Raising The Bar For Security Standards

The range of standards that Oracle provides compliance with is one of the most comprehensive among the leading cloud providers, according to Omdia’s SWOT report.

While currently compliant with ISO 27001, SOC1, SOC2, PCI DSS, HIPAA/HITECH, FedRAMP Moderate, and FedRAMP High, Oracle Cloud Infrastructure also follows a media destruction process adhering to NIST SP 800-88r1 and DoD emergency destruction and secret classification standards.

A new feature in Oracle’s Gen 2 Cloud is Isolated Network Virtualization, which isolates physical network interfaces and cards from each other, containing an attacker who has gained unauthorized access to the network. Through this process, Oracle helps companies protect against bad actors attacking their networks when an instance (bare metal, virtual machine, or container) has been compromised.

Gaining Share Through Human Customer Support

While all cloud infrastructure vendors let their customers access online documentation and community forums for free, many of those vendors charge hefty fees for hands-on, expert support—like the kind you’ll need to fix a latency problem or network outage.

But companies running their workloads on the Oracle Cloud Infrastructure Free Tier can get an enterprise-level support package, which includes two Oracle Autonomous Databases with powerful tools like Oracle Application Express (APEX) and Oracle SQL Developer; two Oracle Cloud Infrastructure Compute virtual machines; block, object, and archive storage; load balancer and data egress; and monitoring and notifications—for free.

It’s this kind of human customer care that has driven approximately 80% of Oracle’s customers to stay in the Oracle Cloud for between one and three years, with 21% of them committing to three-year subscriptions, according to Omdia’s SWOT analysis. The report also shows that more than 50% of Oracle’s customers increase their spend once they have moved to Oracle Cloud Infrastructure, and that new customers are moving to Oracle Cloud at a rate of more than 150% year over year.

This level of premium support, including zero fees for onboarding or offboarding customers to its eponymous Cloud Infrastructure, demonstrates Oracle’s strong commitment to being an open, enterprise-grade cloud—earning its position as a top-five cloud provider in the world.

We live in a multi-cloud world and customers expect cloud providers to excel at interconnecting various platforms, applications, and workloads. Omdia’s report highlights several critical elements that make Oracle uniquely qualified to provide this interconnectivity, while offering exceptional performance, pricing, and support.

Click here for more Industry Analyst Research.

Oracle Cloud Infra [1Z0-1072] Certification For Architects

Online Apps DBA - Tue, 2020-06-02 05:38

Hear us when we say: Oracle Cloud (OCI) is everywhere, whether it is Zoom, 8×8, Air Asia, Michelin, Vodafone, Xerox, Airtel, Netflix & many more. Things are moving from On-Premise to Cloud, and so are DBAs, Apps DBAs, System Administrators, Network, Storage & Security Admins & Architects, and everyone else working On-Premise. The 1Z0-1072 Oracle Cloud […]

The post Oracle Cloud Infra [1Z0-1072] Certification For Architects appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to create an Azure SQL Database using Azure PowerShell

Yann Neuhaus - Tue, 2020-06-02 00:56

In this blog post, I’ll go through the steps to create an Azure SQL Database using Azure PowerShell.

Introduction to Azure SQL Database

The SQL database services provided by Microsoft on the cloud are now grouped under the name of Azure SQL.

The Azure SQL family contains services that I will briefly summarize;

  • Azure SQL Database – DBaaS (Database-as-a-Service)
  • Azure SQL Managed Instance – PaaS (Platform-as-a-Service)
  • SQL Server on Azure VMs – IaaS (Infrastructure-as-a-Service)

In this blog post, I will use Azure SQL Database.

Azure SQL Database offers the following deployment options:

  • Single database – a fully-managed, isolated database
  • Elastic pool – a collection of single databases with a shared set of resources

I will not describe this service in detail but basically, it is a fully managed SQL database, similar to a contained database in SQL Server.

All the steps below can be done on the Azure Portal. For this blog post, I’ll only use Azure PowerShell which you can install on your operating system or use online with Azure Cloud Shell.

1) Install and Import Az module

First, we need to install Azure PowerShell, which provides a set of commands to manage your Azure resources from your favorite operating system: Windows, macOS, or Linux.

PS C:\> Install-Module Az
PS C:\> Get-InstalledModule -Name Az | select Name, Version

Name Version
---- -------
Az   4.1.0

PS C:\> Import-Module Az

2) Sign in to Azure

Connect to your Tenant using your Tenant ID.
You can find your Tenant ID in the Azure Portal under “Azure Active Directory”.

PS C:\> Connect-AzAccount -Tenant 'b9c70123-xxx-xxx-xxx-xxxx'

Account           SubscriptionName     TenantId                      Environment
-------           ----------------     --------                      -----------
my@Email.com      Visual Studio Ent    b9c70978-xxx-xxx-xxx-xxxx     AzureCloud

PS C:\>

Then, if you use multiple Azure subscriptions, select the one you want to work with.

PS C:\> Set-AzContext -SubscriptionId '891f5acc-xxx-xxx-xxx-xxxx'

3) Create a Resource Group

Let’s start by creating a Resource Group. A resource group is a container that holds related resources for an Azure solution.

PS C:\> New-AzResourceGroup -Name 'SQLFailover-lab-rg' -Location 'France Central'

ResourceGroupName : SQLFailover-lab-rg
Location          : francecentral
ProvisioningState : Succeeded
Tags              :
ResourceId        : /subscriptions/891f5acc-xxx-xxx-xxx-xxxx/resourceGroups/SQLFailover-lab-rg

To list all your Resource Groups use the Get-AzResourceGroup command:

Get-AzResourceGroup | select ResourceGroupName

4) Create an SQL Server

Create a logical server with a unique server name to host our SQL databases.

New-AzSqlServer -ResourceGroupName 'SQLFailover-lab-rg' `
    -ServerName 'snasqlsrv-lab-01' `
    -Location 'France Central' `
    -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "LabSqlAdmin", $(ConvertTo-SecureString -String "MyPassword" -AsPlainText -Force))

The last parameter defines the credentials I will use to connect as an administrator to my SQL Database server.
Once the server is created you get the FQDN that will be used for connections.

PS C:\> Get-AzSqlServer | select FullyQualifiedDomainName

FullyQualifiedDomainName
------------------------
snasqlsrv-lab-01.database.windows.net

5) Create a Server Firewall Rule

To access the server and all the databases from my client computer I need to create a server firewall rule.
Here I use a WebRequest to get my public IP into a variable and then create the server firewall rule.

$myIp = (Invoke-WebRequest ifconfig.me/ip).Content
New-AzSqlServerFirewallRule -ResourceGroupName 'SQLFailover-lab-rg' `
    -ServerName 'snasqlsrv-lab-01' `
    -FirewallRuleName "AllowedIPs" -StartIpAddress $myIp -EndIpAddress $myIp

6) Connect to the SQL Server from SSMS

The SQL Server is now accessible from my client computer on port 1433. I can connect to it using SSMS.


7) Create a database

The following command will create a database named “DB01” with an S0 performance level, using the sample schema “AdventureWorksLT”.

New-AzSqlDatabase  -ResourceGroupName 'SQLFailover-lab-rg' `
    -ServerName 'snasqlsrv-lab-01' `
    -DatabaseName 'DB01' `
    -RequestedServiceObjectiveName "S0" `
    -SampleName "AdventureWorksLT"
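
As a quick sanity check, the edition and performance level can be read back from T-SQL with the standard DATABASEPROPERTYEX function:

SELECT DATABASEPROPERTYEX('DB01', 'Edition') AS Edition,
       DATABASEPROPERTYEX('DB01', 'ServiceObjective') AS ServiceObjective;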

This is it. We just created an Azure SQL Database with a few commands.

Bonus: Creating a Copy of the database

I just want to mention a nice T-SQL command available with Azure SQL Database that doesn’t exist on-premises: “CREATE DATABASE … AS COPY OF”.
This command creates a copy of a database under a new name. It replaces the backup/“restore with move” that we sometimes do on SQL Server.
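
For example, connected to the master database of the server (the copy name DB01_copy is just an example):

CREATE DATABASE DB01_copy AS COPY OF DB01;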

Cleanup

When you’re done with your tests you can delete all resources in the resource group (firewall rules, server, databases) with a single command;

PS C:\> Remove-AzResourceGroup -ResourceGroupName 'SQLFailover-lab-rg'

 

 

The article How to create an Azure SQL Database using Azure PowerShell first appeared on Blog dbi services.
