Tom Kyte
Why is json_array_t using 0-based indexing
It took me 25 years to get used to Oracle using 1-based indexing in pretty much all APIs.
Now the rather new json_array_t data structure uses 0-based indexing, and it drives me crazy.
Is there any reason behind this "strange anomaly", or did someone just want to drive people crazy?
The following example only prints 2 and 3, because the loop must instead be written as "FOR i IN 0 .. c_json.get_size() - 1 LOOP":
<code>
DECLARE
  c_json CONSTANT json_array_t := json_array_t('[1, 2, 3]');
BEGIN
  FOR i IN 1 .. c_json.get_size() LOOP
    dbms_output.put_line(c_json.get_number(i));
  END LOOP;
END;
/
</code>
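For reference, the 0-based form from the question, which prints all three elements:
<code>
DECLARE
  c_json CONSTANT json_array_t := json_array_t('[1, 2, 3]');
BEGIN
  -- positions run from 0 to get_size() - 1
  FOR i IN 0 .. c_json.get_size() - 1 LOOP
    dbms_output.put_line(c_json.get_number(i));
  END LOOP;
END;
/
</code>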
Categories: DBA Blogs
Converting column number values into array number values in SQL
I have a table like below.
<code>create table t2 ( id varchar2(1),val number) ;
insert into t2 values ('a',1);
insert into t2 values ('a',2);
insert into t2 values ('a',3);
insert into t2 values ('a',4);
insert into t2 values ('b',1);
insert into t2 values ('b',2);
insert into t2 values ('b',3);
insert into t2 values ('c',1);
insert into t2 values ('c',2);
insert into t2 values ('c',4);
insert into t2 values ('d',1);
insert into t2 values ('d',2);</code>
We need to produce output like below.
<code>id  x
--- -------
a   1,2,3,4
b   1,2,3
c   1,2,4
d   1,2</code>
This can be achieved with the query below:
<code>select id,LISTAGG(val, ',') WITHIN GROUP (ORDER BY val ) as x
from t2
group by id</code>
Here the x column is a character datatype, but I need to convert it to a varray of number / nested table of number (not a varray/nested table of character).
I tried the following:
<code>CREATE TYPE varchar_TT AS TABLE OF varchar(10);
with z as (
select id,varchar_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x ,varchar_TT('1,2') y
from t2
group by id )
select id , x ,y from z ;
o/p
----
id   x                              y
---- ------------------------------ -------------------------
a    C##SIVA.VARCHAR_TT('1,2,3,4')  C##SIVA.VARCHAR_TT('1,2')
b    C##SIVA.VARCHAR_TT('1,2,3')    C##SIVA.VARCHAR_TT('1,2')
c    C##SIVA.VARCHAR_TT('1,2,4')    C##SIVA.VARCHAR_TT('1,2')
d    C##SIVA.VARCHAR_TT('1,2')      C##SIVA.VARCHAR_TT('1,2')</code>
If I add the condition below, I get no results:
<b>where y member of x ;</b>
So I tried to convert to a number array:
<code>CREATE TYPE number_TT AS TABLE OF number;
with z as (
select id,number_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x ,number_TT(1,2) y
from t2
group by id )
select id , x ,y from z ;
ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.</code>
1) I need output like below, so that I can use the <b><u>member</u></b> and <b><u>submultiset</u></b> conditions.
<code>o/p
----
id   x                           y
---- --------------------------- ----------------------
a    C##SIVA.NUMBER_TT(1,2,3,4)  C##SIVA.NUMBER_TT(1,2)
b    C##SIVA.NUMBER_TT(1,2,3)    C##SIVA.NUMBER_TT(1,2)
c    C##SIVA.NUMBER_TT(1,2,4)    C##SIVA.NUMBER_TT(1,2)
d    C##SIVA.NUMBER_TT(1,2)      C##SIVA.NUMBER_TT(1,2)

select varchar_tt('1,2') x, number_TT(1,2) y from dual;

x                          y
-------------------------- ----------------------
C##SIVA.VARCHAR_TT('1,2')  C##SIVA.NUMBER_TT(1,2)</code>
Please let me know how to convert a character array to a number array.
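One approach that might avoid string parsing altogether (a sketch, assuming the CAST(COLLECT(...)) aggregate is acceptable in your version) is to build the number collection directly from the rows instead of converting the LISTAGG string:
<code>with z as (
  select id,
         cast(collect(val order by val) as number_TT) as x,
         number_TT(1,2) as y
  from   t2
  group  by id )
select id, x, y
from   z
where  y submultiset of x;</code>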
2)
<code>create table t4 ( id VARCHAR2(1) , val number_tt )
NESTED TABLE val STORE AS val_2 ;</code>
How do I insert into table t4 from t2?
The expected output of querying t4 should be like...
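A sketch of the insert along the same lines (again assuming COLLECT is acceptable):
<code>insert into t4 (id, val)
select id, cast(collect(val order by val) as number_TT)
from   t2
group  by id;</code>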
Categories: DBA Blogs
Why does a view's trigger disappear?
I found this in your archives, but a solution is not provided (or is there no solution?).
Archive : "Why the trigger disappears... May 28, 2003
Reviewer: Kamal Kishore from New Jersey, USA "
Hi Tom,
After you re-create the view definition using CREATE OR REPLACE (maybe to change its condition), the trigger on the view disappears. Is this expected behaviour?
SQL> create or replace view emp_view
2 as
3 select * from emp
4
SQL> /
View created.
SQL> create or replace trigger trig_emp_view
2 instead of insert or update on emp_view
3 for each row
4 begin
5 Null ;
6 end ;
7 /
Trigger created.
SQL> show errors
No errors.
SQL> select ut.trigger_name, ut.table_owner, ut.table_name
2 from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
3 /
TRIGGER_NAME                   TABLE_OWNER                    TABLE_NAME
------------------------------ ------------------------------ ------------------------------
TRIG_EMP_VIEW                  KKISHORE                       EMP_VIEW
1 row selected.
SQL> create or replace view emp_view
2 as
3 select * from emp
4 /
View created.
SQL> select ut.trigger_name, ut.table_owner, ut.table_name
2 from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
3 /
no rows selected
Followup:
the "or replace" is replacing the view and all related things. the create or
replace preserves grants -- not the triggers. it is a "new view"
====>> So what should I do if I have views with INSTEAD OF triggers that became invalid? What syntax can I use to alter the view without my trigger disappearing?
If "create or replace" cannot be used, what syntax can I use?
Categories: DBA Blogs
CPU Utilization
Hi Tom,
We run a multi-user OLTP system on an Exadata Quarter Rack (Linux version). Although system response time is consistent and the system performs well, we observed run queues with CPU utilization at about 70% on both nodes. What could be the reason?
My understanding has always been that run queues form only if system utilization exceeds 100%. But in this case the CPU on both nodes is only about 65% utilized, with the rest free.
But maybe my understanding is flawed.
Could you please explain the concept of CPU utilization and run queues vis-a-vis cpu_count, especially for an OLTP workload?
Categories: DBA Blogs
Read consistency across cursors in one procedure
I am looking for read consistency across multiple cursors in a packaged procedure. In the past I have opened the cursors that I wanted to be consistent at the start of the procedure, used them, and closed them at the end. I am starting to think that the time the first cursor takes to open and resolve its result set is making the subsequent cursor inconsistent, although this seems to have worked 99% of the time.
Example:
DECLARE
  CURSOR Cur1 IS
    SELECT SUM(X) FROM A WHERE SummaryFlag = 'N';
  CURSOR Cur2 IS
    SELECT ROWID FROM A WHERE SummaryFlag = 'N';
BEGIN
  OPEN Cur1;
  OPEN Cur2;
  .
  FOR Rows IN Cur1 LOOP
    UPDATE ASummary
  .
  .
  FOR Rows IN Cur2 LOOP
    UPDATE A SET SummaryFlag = 'Y' WHERE RowId = Rows.ROWID;
I have had a few occasions where the summary table does not contain the information that has now been flagged as summarized.
Does opening the cursors one right after the other guarantee a consistent result set, and if not, why? Will using "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" fix this? How can I set my isolation level and rollback segment at the same time?
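For reference, a minimal sketch of the serializable approach (assuming the whole summarization runs in one transaction); SET TRANSACTION must be the first statement of the transaction:
<code>-- every query in the transaction then sees one consistent snapshot
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... open both cursors, update ASummary, flag the rows in A ...
COMMIT;</code>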
Thanks in advance.
Categories: DBA Blogs
Receiving Webhook Events from Stripe Payment Processing
Here is my challenge: I am developing an application that receives webhook notifications when events occur. I have successfully used the RESTful Services functionality in APEX (SQL Workshop > RESTful Services) to retrieve data at the first (root?) level. From the request sent by Stripe below, I can use parameters to retrieve the id, object, api_version, created, etc., but I fail to retrieve data.object.id or anything nested at a lower level (apologies if I am using the wrong descriptors here).
I have tried two approaches, unsuccessfully:
1) identifying the field as a parameter in the handler, in a number of ways;
2) retrieving the full JSON payload using :body, :body_text, :payload, :json_payload, etc.
Any guidance on how I could identify specific fields lower in the hierarchy (for example, data.object.id with value "cus_PfPbVdZHzvJq0E" below) as a parameter? Or any guidance on how I could grab the full JSON payload?
Any guidance is appreciated.
Dwain
<code>{
  "id": "evt_1Oq4mjJ861pVT3w2L6jYiwce",
  "object": "event",
  "api_version": "2018-02-28",
  "created": 1709432897,
  "data": {
    "object": {
      "id": "cus_PfPbVdZHzvJq0E",
      "object": "customer",
      "account_balance": 0,
      "address": null,
      "balance": 0,
      "created": 1709432896,
      "currency": null,
      "default_currency": null,
      "default_source": null,
      "delinquent": false,
      "description": null,
      "discount": null,
      "email": "mike@dc.com",
      "invoice_prefix": "2420987A",
      "invoice_settings": {
        "custom_fields": null,
        "default_payment_method": null,
        "footer": null,
        "rendering_options": null
      },
      "livemode": false,
      "metadata": {},
      "name": "mike",
      "next_invoice_sequence": 1,
      "phone": null,
      "preferred_locales": [],
      "shipping": null,
      "sources": {
        "object": "list",
        "data": [],
        "has_more": false,
        "total_count": 0,
        "url": "/v1/customers/cus_PfPbVdZHzvJq0E/sources"
      },
      "subscriptions": {
        "object": "list",
        "data": [],
        "has_more": false,
        "total_count": 0,
        "url": "/v1/customers/cus_PfPbVdZHzvJq0E/subscriptions"
      },
      "tax_exempt": "none",
      "tax_ids": {
        "object": "list",
        "data": [],
        "has_more": false,
        "total_count": 0,
        "url": "/v1/customers/cus_PfPbVdZHzvJq0E/tax_ids"
      },
      "tax_info": null,
      "tax_info_verification": null,
      "test_clock": null
    }
  },
  "livemode": false,
  "pending_webhooks": 1,
  "request": {
    "id": "req_KtKtxAnXwioenZ",
    "idempotency_key": "7263ed4a-0295-4a4e-a0b8-d7d3bf7f03b3"
  },
  "type": "customer.created"
}</code>
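In case it is useful, a hedged sketch of a POST handler body (stripe_events is a hypothetical staging table): ORDS exposes the raw POST body through the :body_text implicit bind (a CLOB; :body is the BLOB variant, and either should be read only once), and JSON_VALUE reaches nested attributes by path:
<code>DECLARE
  l_payload     CLOB := :body_text;  -- raw POST body as a CLOB
  l_customer_id VARCHAR2(200);
BEGIN
  -- dot-notation path into the nested object
  l_customer_id := JSON_VALUE(l_payload, '$.data.object.id');

  -- stripe_events is illustrative; store/process as needed
  INSERT INTO stripe_events (customer_id, payload)
  VALUES (l_customer_id, l_payload);
END;</code>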
Categories: DBA Blogs
How to call a REST API that accepts x-www-form-urlencoded in a PL/SQL procedure in APEX
How do I call a REST API that accepts x-www-form-urlencoded content from a PL/SQL procedure in APEX?
I am calling:
https://api.textlocal.in/docs/sendsms
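A hedged sketch using APEX_WEB_SERVICE (the endpoint and parameter names are illustrative placeholders based loosely on the linked docs; substitute the real ones):
<code>DECLARE
  l_response CLOB;
BEGIN
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://api.textlocal.in/send/',  -- illustrative endpoint
    p_http_method => 'POST',
    p_body        => 'apikey='   || 'YOUR_API_KEY'
                  || '&numbers=' || '911234567890'
                  || '&message=' || utl_url.escape('Test message', true));

  dbms_output.put_line(l_response);
END;</code>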
Categories: DBA Blogs
Error in PL/SQL code
When I try to run this code:
<code>DECLARE
  STUDENT_ID NUMBER;
BEGIN
  -- Generate the next value for the sequence
  SELECT LMS_STUDENT_DETAILS_SEQ.nextval;

  -- Insert data into LMS_STUDENT_DETAILS table
  INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
  VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);

  -- Insert data into LMS_BORROWER table
  INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
  VALUES (LMS_BORROWER_SEQ.nextval, STUDENT_ID, 'STUDENT');
END;</code>
I faced this error:
ORA-06550: line 1, column 106: PLS-00103: Encountered the symbol "DECLARE" when expecting one of the following: ( - + case mod new not null <an identifier> <a double-quoted delimited-identifier> <a bind variable> continue avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string literal with character set specification> <a number> <a single-quoted SQL string> pipe <an alternatively-quoted string literal with character set
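For reference, a hedged sketch of a version that compiles: the standalone SELECT needs an INTO clause (or, from 11g on, a direct assignment from the sequence):
<code>DECLARE
  STUDENT_ID NUMBER;
BEGIN
  STUDENT_ID := LMS_STUDENT_DETAILS_SEQ.NEXTVAL;
  -- or: SELECT LMS_STUDENT_DETAILS_SEQ.NEXTVAL INTO STUDENT_ID FROM dual;

  INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
  VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);

  INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
  VALUES (LMS_BORROWER_SEQ.NEXTVAL, STUDENT_ID, 'STUDENT');
END;</code>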
Categories: DBA Blogs
Mixed version dataguard
According to MOS note 785347.1 it seems possible to have a Data Guard configuration with an 11.2 primary and a 12.2 (or even later) standby, but the note is really very condensed.
Could you please confirm that 11.2 -> 12.2 is really possible?
If so, what about 11.2 -> 19.x?
Or 12.2 -> 19.x?
Of course the idea is to upgrade to a later version with very short downtime; after switching to the newer version, the old one would be discarded and the Data Guard configuration no longer used.
Best regards
Mauro
Categories: DBA Blogs
Is it a must to run pupbld.sql as system
Tom,
I create databases, then I run catalog.sql and catproc.sql. Sometimes I do not run pupbld.sql. Users may get a warning message, but they can log in and work.
But my friend says that if pupbld.sql is not run as SYSTEM, users will get error messages and cannot log into the database at all. Is that true?
Is it a must to run pupbld.sql? I could not see in the documentation whether it is a must. If it is a must, how am I able to log in?
Is it called by any other script, like catalog.sql or catproc.sql? I grepped both files for pupbld.sql; it is not there.
Please clarify.
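(For reference, a typical manual invocation, assuming a default ORACLE_HOME layout:)
<code>SQL> connect system
SQL> @?/sqlplus/admin/pupbld.sql</code>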
Regards
Ravi
Categories: DBA Blogs
How to update a user defined database package in production
I have a user-defined database package which is used quite heavily. When I need to update the code body, I get several errors such as:
<code>ORA-04061: existing state of package body "CS.PACKAGE" has been invalidated
ORA-04065: not executed, altered or dropped package body "CS.PACKAGE"
ORA-06508: PL/SQL: could not find program unit being called: "CS.PACKAGE"
ORA-06512: at "CS.PROCEDURE", line 228</code>
We are using a connection pool. How do I put the changes into PACKAGE, without getting several of the above errors?
I cannot control the use of the package, and it is very heavily used.
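One detail that may frame an answer (hedged): ORA-04061 is raised only for sessions that hold state in the package, i.e. package-level variables or cursors. A stateless package recompiles transparently under existing sessions; pkg_demo below is illustrative:
<code>CREATE OR REPLACE PACKAGE pkg_demo AS
  -- g_calls NUMBER := 0;  -- package state: with this line, replacing the
  --                          body raises ORA-04061 in sessions already using it
  PROCEDURE do_work;
END pkg_demo;
/</code>
Otherwise, Edition-Based Redefinition is the usual route for hot-patching a heavily used package.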
Categories: DBA Blogs
Explain plan estimate vs actual
Hi,
I used EXPLAIN PLAN and got the following results. Based on cost and time, does query 2 perform significantly better than query 1? The actual runtime of query 1 is approximately 1 minute and 40 seconds, but the plan shows 07:47:02. Why is the estimated time so different from the actual? Your help is much appreciated!
Query 1:
<code>------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71730 | 2241K| 717M (1)| 07:47:02 |
|* 1 | TABLE ACCESS FULL| TBL1 | 71730 | 2241K| 717M (1)| 07:47:02 |
------------------------------------------------------------------------------</code>
Query 2:
<code>------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71730 | 2241K| 51028 (1)| 00:00:02 |
|* 1 | TABLE ACCESS FULL| TBL1 | 71730 | 2241K| 51028 (1)| 00:00:02 |
------------------------------------------------------------------------------</code>
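For comparing estimates with actuals, a hedged sketch (substitute your real statement; it must be re-run with rowsource statistics enabled):
<code>SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM tbl1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));</code>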
Categories: DBA Blogs
to_char of a big number inserted into the database becomes scientific notation
Hi, Tom. Please see the script below.
<code>create table t0326 (id number, num varchar2(100));
declare
v_empno number:=125854437665589038536841445202964995521300;
begin
dbms_output.put_line('v_empno -- ' || v_empno);
dbms_output.put_line('to_char(v_empno) -- '|| to_char(v_empno));
insert into t0326 values(10, to_char(v_empno));
commit;
end;
/
v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
select * from t0326;
ID NUM
---------- ------------------------------------------------------------
10 1.2585443766558903853684144520296500E+41
declare
v_empno number:=125854437665589038536841445202964995521300;
v_s_empno varchar2(100);
begin
v_s_empno := to_char(v_empno);
dbms_output.put_line('v_empno -- ' || v_empno);
dbms_output.put_line('to_char(v_empno) -- '|| to_char(v_empno));
dbms_output.put_line('v_s_empno -- '|| v_s_empno);
insert into t0326 values(20, to_char(v_empno));
insert into t0326 values(30, v_s_empno);
insert into t0326 values(40, to_char(v_empno, 'FM999999999999999999999999999999999999999999999999999999999'));
commit;
end;
/
v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
v_s_empno -- 125854437665589038536841445202964995521300
select * from t0326;
ID NUM
---------- -----------------------------------------------------------------------
10 1.2585443766558903853684144520296500E+41
20 1.2585443766558903853684144520296500E+41
30 125854437665589038536841445202964995521300
40 125854437665589038536841445202964995521300 </code>
The value displays normally when to_char(v_empno) is passed to dbms_output.put_line, but the insert into the database converts it to scientific notation.
I tried two ways to work around this problem:
1. Use a variable to store to_char(v_empno), then insert this variable into the database.
2. Use to_char(xx, fmt) to control the format.
I wonder why to_char(v_empno) in dbms_output.put_line is not scientific notation. And why does adding a temporary variable solve the problem?
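A possibly relevant detail (hedged): inside the INSERT, the TO_CHAR is evaluated by the SQL engine, whose default number-to-string conversion appears to fall back to scientific notation once a value needs more than about 40 characters, while dbms_output goes through the PL/SQL engine's conversion. The same value shows the difference directly:
<code>-- SQL engine conversion of the same 42-digit value:
select to_char(125854437665589038536841445202964995521300) from dual;
-- 1.2585443766558903853684144520296500E+41</code>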
Categories: DBA Blogs
Catastrophic Database Failure -- Deletion of Control and Redo Files
We recently had a database failure that resulted in data loss after an Oracle 19.3.0.0.0 database had both its control files and redo log files deleted. Please note that I am not a DBA, but simply an analyst who supports the system that sits on this Oracle database. Any amount of data loss is fairly serious, and I am wondering how we avoid this in the future.
Before the control and redo files were deleted, we had an event wherein the drive this database sits on filled up. This caused the database to stop writing transactions and locked users out of the application. Once space was freed on the drive, the database operated normally for several hours until...the redo and control files were deleted.
What would have caused the control and redo files to be deleted?
In trying to figure out what happened, it was noted that if we had expanded the drive's capacity in response to its becoming full, the later data loss would not have happened. Does Tom agree with that sentiment? Are the two events linked (disk drive nearly full, then later data loss), or are they symptomatic of two different things?
Categories: DBA Blogs
Is 'SELECT * FROM :TABLE_NAME;' available?
Is 'SELECT * FROM :TABLE_NAME;' available?
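Not directly: bind variables supply values, not identifiers such as table names, so the table name has to be concatenated into dynamic SQL. A hedged sketch (DBMS_ASSERT guards the identifier against SQL injection):
<code>DECLARE
  l_tab VARCHAR2(128) := 'EMP';  -- would come from user input
  l_cur SYS_REFCURSOR;
BEGIN
  OPEN l_cur FOR 'SELECT * FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(l_tab);
  -- fetch from l_cur as usual
END;</code>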
Categories: DBA Blogs
Is fragmentation an issue?
Hi all,
I have about 1000 tables. Some of them have had rows deleted and updated, which creates fragmentation.
How do I determine which tables are fragmented?
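A hedged starting point (it assumes optimizer statistics are reasonably fresh): compare each segment's allocated size with the size implied by its statistics, and look at the tables with the biggest gap:
<code>select t.owner, t.table_name,
       round(s.bytes / 1024 / 1024)                    as alloc_mb,
       round(t.num_rows * t.avg_row_len / 1024 / 1024) as est_data_mb
from   dba_tables t
join   dba_segments s
       on s.owner = t.owner and s.segment_name = t.table_name
where  s.segment_type = 'TABLE'
and    t.num_rows > 0
order  by s.bytes - t.num_rows * t.avg_row_len desc;</code>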
Categories: DBA Blogs
19c doesn't allow truncation of data that is longer than the column's CHAR(40) definition
We have an application that was written to insert a variable that is CHAR(50) into a column that is defined as CHAR(40). In Oracle 11g (I know this is very old) it would merely truncate the last 10 characters without issue. However, Oracle 19c doesn't allow this and raises an exception (which I believe should always have been the case). Where can I find documentation of this restriction and when it changed, and is there a way around it other than changing the program code?
Oracle 11 truncated the extra 10 characters in the statement below:
ADBBGNX_ADDRESS_LINE_1 := agentrecord.producerrec.businessAddressLine1;
Oracle 19 throws an exception with a NULL error status.
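If changing code does become an option, making the truncation explicit is the usual workaround; a sketch using the identifiers from the question:
<code>ADBBGNX_ADDRESS_LINE_1 := SUBSTR(agentrecord.producerrec.businessAddressLine1, 1, 40);</code>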
Categories: DBA Blogs
Returning data in EXECUTE IMMEDIATE with dynamic values in USING clause
Hi Team
I have the following scenario.
Step 1) User clicks through to a particular App UI screen.
Step 2) User selects multiple filters on the UI - say filter1, filter2 - which correspond to table columns.
Step 3) For each selected filter, the user enters data - say Mark (for filter1), Will (for filter2) - against which the search will be performed on the respective filters (aka table columns).
Step 4) The user inputs from Steps 2 and 3 are passed to a PL/SQL API, which returns the desired SQL result in a paginated manner (pageSize: 50).
The user inputs from Steps 2 and 3 will be dynamic.
I have tried to implement this using native dynamic SQL, but it looks like I have hit a dead end: I am able to use dynamic values in the USING clause, but not able to return the data from the SELECT statement with EXECUTE IMMEDIATE.
The LiveSQL link shared above has a reproducible test case.
If I comment line "BULK COLLECT INTO l_arId, l_arName, l_arType" in the procedure, the block executes successfully.
But I need the result set from SELECT statement in procedure as output.
Looking for some advice here.
Thanks a bunch!
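One pattern that might help (a hedged sketch; the table and column names are illustrative): keep the bind list fixed by writing every optional filter as (:bN IS NULL OR col = :bN), so the same OPEN-FOR works no matter which filters were picked, and the ref cursor carries the result set back to the caller:
<code>DECLARE
  l_f1  VARCHAR2(100) := 'Mark';  -- filter1 value, NULL if not selected
  l_f2  VARCHAR2(100) := 'Will';  -- filter2 value, NULL if not selected
  l_cur SYS_REFCURSOR;
BEGIN
  OPEN l_cur FOR
    'SELECT id, name, type
       FROM some_table
      WHERE (:b1 IS NULL OR col1 = :b2)
        AND (:b3 IS NULL OR col2 = :b4)'
    USING l_f1, l_f1, l_f2, l_f2;
  -- return l_cur to the caller and paginate as needed
END;</code>
(EXECUTE IMMEDIATE also supports BULK COLLECT INTO together with USING, but a ref cursor pairs more naturally with pagination.)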
Categories: DBA Blogs
Restore from Archivelog only
Hello Sir,
I was able to get one scenario to work. I had a VM (server) running Oracle 19c with just one table of 5 records, and I did a backup of the whole VM (disk backup). Then I added a new table in my db with 3 records (ensuring the db is in ARCHIVELOG mode) and ran:
rman target /
backup database plus archivelog;
Then I added 2 more records and noted the system time, let's say 2024-04-06 15:33:55 (so I can restore up to this time). So basically a new table with 5 records. Once all this was done, I ran the command below:
backup incremental level 1 database plus archivelog;
Now I deleted my VM and restored the first old copy of my VM backup (the one that had 1 table and 5 records). After this VM restore, I followed the steps below and was able to get point-in-time recovery to work up to 2024-04-06 15:33:55 (at which point I should have 2 tables with 5 records each). The main step I had missed earlier was RESTORING the control file, since I was doing the restore on a different (new VM) server:
<code>SQL>  shutdown abort;
RMAN> startup nomount;
RMAN> RESTORE CONTROLFILE FROM '/mnt/orabkup1/snapcf_ev.f';
RMAN> startup mount;
RMAN> run
{
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
  SQL 'ALTER DATABASE OPEN RESETLOGS';
}</code>
Everything was good here, and with this approach I got point-in-time recovery to work; I had simply been missing the restore of the control file.
Now for the scenario which I still cannot work out, where I am sure I am making a very basic mistake (maybe I don't understand the archive log and redo log properly).
The scenario I want to make work is: I have a VM backup (disk backup) at the level of 1 table and 5 records. Then I create a 2nd table and add, let's say, 2 records to it, and this time I only take an ARCHIVELOG backup; then I add 3 more records, back up the archive logs again, and note the time (let's assume 2024-04-06 15:33:55), with the following steps:
<code>backup archivelog all;
insert into xxx VALUES(3,'Line 1');
insert into xxx VALUES(4,'Line 1');
commit;
backup incremental level 1 archivelog all;</code>
Here I have not done backup database plus archivelog (assuming all those new inserts would be in the redo log and maybe in the archive log?). Now I delete this VM, restore a new VM from disk backup 1 (where only 1 table with 5 records exists), and simply run the following:
<code>shutdown abort;
startup nomount;
RESTORE CONTROLFILE FROM '/mnt/orabkup1/snapcf_ev.f';
startup mount;
run
{
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  RESTORE ARCHIVELOG ALL;
  RECOVER DATABASE;
  SQL 'ALTER DATABASE OPEN RESETLOGS';
}
</code>
But unfortunately it complains about ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'D:\BASE\MYDB\DATA\SYSTEM01.DBF'
Not sure why, as I was thinking that Arc...
Categories: DBA Blogs
Does migrating a 4k tablespace block size to an 8k database cause a performance impact?
I am migrating an 11g database cross-endianness from on-prem to ExaCS. The on-prem database db_block_size is 4k and all the tablespaces are also of 4k block size. <u>Since I cannot provision a non-standard block size database in OCI</u>, I am worried about the performance impact caused by the different block size. Please help me understand what database block size would be recommended for the scenario below.
<code>
-----------------------------------------------------------
Source : ON_PREM
-----------------------------------------------------------
Platform / ID : AIX-Based Systems (64-bit) / 6
Version : 11.2.0.4.0
Size (GB) : 17 TB
db_block_size : 4k
All Tablespaces BLK Size : 4k
-----------------------------------------------------------
Target : OCI - EXACS
-----------------------------------------------------------
Platform / ID : LINUX / 13
Version : 11.2.0.4.0
Size (GB) : 17 TB
db_block_size : 8K
APP Tablespaces BLK Size : 4k
SYSTEM/SYSAUX/TEMP/UNDO : 8K
</code>
Phase 1: Migrate from AIX 11g to ExaCS 11g.
Phase 2: 19c upgrade and multitenant conversion {<i>due to a business requirement we have to split the migration and the upgrade</i>}.
<b>Questions:</b>
1. Can we guarantee that there will be no performance impact from the difference between the tablespace and database block sizes if the db_4k_cache_size parameter is set to an adequately large value?
2. Or is it better to go with the same 4k block size as the source on-premises database?
Of course application regression testing and RAT will be included, but testing both cases is not feasible, hence asking for expert advice.
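For reference, a sketch of the parameter from question 1 (the 2G figure is purely illustrative; size it from testing): a non-default block-size tablespace can be used only once a matching buffer cache exists.
<code>ALTER SYSTEM SET db_4k_cache_size = 2G SCOPE=BOTH SID='*';</code>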
Categories: DBA Blogs