Feed aggregator

Sql2Odi now supports the WITH statement

Rittman Mead Consulting - 14 hours 32 min ago

Rittman Mead's Sql2Odi tool, which converts SQL SELECT statements to ODI Mappings, now supports the SQL WITH statement as well. (For an overview of the tool's capabilities, please refer to our blog posts here and here.)

The Case for WITH Statements

If a SELECT statement is complex, in particular if it queries data from multiple source tables and relies on subqueries to do so, there is a good chance that rewriting it as a WITH statement will make it easier to read and understand. Let me show you what I mean...

SELECT 
  LAST_NAME,
  FIRST_NAME,
  LAST_NAME || ' ' || FIRST_NAME AS FULL_NAME,
  AGE,
  COALESCE(LARGE_CITY.CITY, ALL_CITY.CITY) CITY,
  LARGE_CITY.POPULATION
FROM 
  ODI_DEMO.SRC_CUSTOMER CST
  INNER JOIN ODI_DEMO.SRC_CITY ALL_CITY ON ALL_CITY.CITY_ID = CST.CITY_ID  
  LEFT JOIN (
    SELECT 
      CITY_ID,
      UPPER(CITY) CITY,
      POPULATION
    FROM ODI_DEMO.SRC_CITY
    WHERE POPULATION > 750000
  ) LARGE_CITY ON LARGE_CITY.CITY_ID = CST.CITY_ID  
WHERE AGE BETWEEN 25 AND 45

This is an example from my original blog posts. Whilst one could argue that the query is not that complex, it does contain a subquery, which means that the query does not read nicely from top to bottom - you will likely need to look at the subquery first for the master query to make sense to you.

The same query, rewritten as a WITH statement, looks like this:

WITH 
BASE AS (
SELECT 
  LAST_NAME,
  FIRST_NAME,
  LAST_NAME || ' ' || FIRST_NAME AS FULL_NAME,
  AGE,
  CITY_ID
FROM 
  ODI_DEMO.SRC_CUSTOMER CST
),
LARGE_CITY AS (
  SELECT 
    CITY_ID,
    UPPER(CITY) CITY,
    POPULATION
  FROM ODI_DEMO.SRC_CITY
  WHERE POPULATION > 750000
),
ALL_DATA AS (
  SELECT
    LAST_NAME,
    FIRST_NAME,
    FULL_NAME,
    AGE,
    COALESCE(LARGE_CITY.CITY, ALL_CITY.CITY) CITY,
    LARGE_CITY.POPULATION
  FROM
    BASE CST
    INNER JOIN ODI_DEMO.SRC_CITY ALL_CITY ON ALL_CITY.CITY_ID = CST.CITY_ID
    LEFT JOIN LARGE_CITY ON LARGE_CITY.CITY_ID = CST.CITY_ID  
  WHERE AGE BETWEEN 25 AND 45
)
SELECT * FROM ALL_DATA

Whilst it is longer, it reads nicely from top to bottom. And the more complex the query, the more that readability matters.

The first version of our Sql2Odi tool did not support WITH statements. But it does now.

Convert a WITH Statement to an ODI Mapping

The process is the same as before: first we add the two statements to our metadata table, along with some additional data such as the ODI Project and Folder names, the name of the Mapping, the Target table we want to populate and how to map the query result to it, the names of the Knowledge Modules and their configuration, etc.
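As a rough illustration of what such a metadata record might look like, here is a hedged sketch; the SQL2ODI_METADATA table and its columns are hypothetical, since the actual Sql2Odi metadata schema is not shown in this post:

-- Hypothetical metadata record; the real table and column names will differ
INSERT INTO SQL2ODI_METADATA (
  MAPPING_NAME,
  ODI_PROJECT,
  ODI_FOLDER,
  TARGET_TABLE,
  IKM_NAME,
  SELECT_STATEMENT
) VALUES (
  'LOAD_TGT_CUSTOMER_CITY_WITH',
  'SQL2ODI_DEMO',
  'GENERATED MAPPINGS',
  'ODI_DEMO.TGT_CUSTOMER_CITY',
  'IKM Oracle Insert',
  'WITH BASE AS (SELECT ... FROM ODI_DEMO.SRC_CUSTOMER) ... SELECT * FROM ALL_DATA'
);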

After running the Sql2Odi Parser, which now happily accepts WITH statements, and the Sql2Odi ODI Content Generator, we end up with two mappings:

What do we see when we open the mappings?

The original SELECT-statement-based mapping is generated like this:

The new WITH statement mapping, though it queries the same data in pretty much the same way, is more verbose:

Additional EXPRESSION components are added to represent references to the WITH subqueries. While this mapping is busier than the one generated from the original SELECT, there should be no noticeable performance penalty - both mappings generate exactly the same output.

Categories: BI & Warehousing

Are You Planning to Move Your Car from Oregon to Idaho?

OraQA - Mon, 2021-08-02 07:44

If you are up to date with the latest news, you may be aware that five rural Oregon counties have recently voted in favor of permanently moving to Idaho in order to join a much more conservative political environment.

These counties are Sherman, Malheur, Baker, Grant, and Lake, where voters passed a measure requiring county officials to discuss moving the Idaho border to the west and incorporating their population.

These counties would now join Jefferson and Union counties in Idaho. Most of the industries in the counties that voted to join Idaho are timber, mining, farming, and trucking.

As a result, many people from Oregon may decide to move to Idaho very soon, and there will be a rush to ship their cars too. If you are looking for the best way to ship a car from Oregon to Idaho, you should consider a service from Ship a Car, Inc.

SAC can deliver a superior shipping service for those who want to relocate their vehicles because of these recent political developments, as many businesses are relocating along with their heavy-haul equipment.

SAC is one of the most experienced transport brokers, with direct access to a wide network of carriers. All you need to do is make a simple call and everything will fall into place. When using a service from Ship A Car, besides having your car transported safely, you will also have complete peace of mind.

How should you prepare for shipping your car?

When you book any car transport service to Idaho, keep the following key things in mind.

  • Inquire about the accepted payment methods

The payment methods accepted by shipping companies are not always the same; there is a lot of variation from company to company. Therefore, before you book their services, ask about their methods of payment.

This will avoid any last-minute confusion when you book. Some companies may demand an advance payment at booking, while a few take the full payment only after the car is delivered at the destination.

  • Request for GPS tracking of your car

Most companies nowadays offer a tracking option when your car is shipped. Many will offer you a GPS tracking system, while a few others may provide the contact number of the driver so that you can call and find out the current position of your vehicle.

  • Do a proper inspection when handing over your car

When you hand over your car at booking, make sure that you have inspected it thoroughly. Take photographs from several angles and give a copy of your inspection report to the transporter.

Make sure that you also inspect your car when receiving it at the destination. Check for any blemishes in the paint, dents or bumps, or anything that did not exist when you handed the car over.

The post Are You Planning to Move Your Car from Oregon to Idaho? appeared first on ORA QA.

Dockerfile and Docker Compose Tutorial for MLOps

Andrejus Baranovski - Mon, 2021-08-02 06:19
This is a tutorial where I talk about MLOps and explain the difference between a Dockerfile and a Docker Compose YML definition file. I briefly explain what you should be aware of if you are planning to move your solution to Kubernetes in the future. I explain, in simple words, what a Dockerfile is and when Docker Compose is useful. The sample service is based on TensorFlow functionality, where we call the model's predict function to process a serving request.

 

Azure SQL Deployment Options | SQL Managed Instance | SQL Database| SQL On VM

Online Apps DBA - Sat, 2021-07-31 09:09

An Azure Database Administrator is responsible for implementing and managing the operational aspects of cloud-native and hybrid data platform solutions built on Azure Data Services and SQL Server. The Azure Database Administrator uses a variety of methods and tools to perform day-to-day operations, including applying knowledge of T-SQL for administrative management purposes. Within the Azure SQL […]

The post Azure SQL Deployment Options | SQL Managed Instance | SQL Database| SQL On VM appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Ansible Tutorial for Beginners Live Training

Online Apps DBA - Sat, 2021-07-31 08:34

#Automation: In simple words, automation means using machines to reduce the work performed by humans. In other words, it frees up time and increases efficiency. #Ansible: Ansible is an open-source automation tool with a simple automation language that can describe IT application environments in Ansible Playbooks. Ansible provides reliability, consistency, […]

The post Ansible Tutorial for Beginners Live Training appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Azure Synapse SQL Vs Dedicated SQL Pool Vs Serverless SQL Pool Vs Apache Spark Pool

Online Apps DBA - Fri, 2021-07-30 05:18

➤ Azure Synapse Analytics: Azure Synapse Analytics is an analytics service that helps with data integration, data warehousing, and big data analytics. ➤ Azure Synapse SQL: Azure Synapse SQL is a big data analytics service used to query and analyze data. It is a distributed query system enabling data warehousing and data virtualization. ➤ Apache Spark: Apache […]

The post Azure Synapse SQL Vs Dedicated SQL Pool Vs Serverless SQL Pool Vs Apache Spark Pool appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Generating a random number of rows for every date within a date range

Tom Kyte - Thu, 2021-07-29 19:46
I have some working SQL below that generates a row for each employee_id. My goal is to get every date in the range via the function, which works fine standalone, then get N (a random number between 1 and 10) rows for each employee_id for every date in the range specified. Once the SQL works I intend to put this code in a procedure so I can pass it a range of dates. So we can be sure we are both running the same version of Oracle, I tested this on Live SQL. Below is some sample output for a single day only. Please note the employee_id and location_id must exist in their corresponding tables. Since my function call always generates dates with a time of 00:00:00, I plan on eventually adding a time to the access_date.

<code>
EMPLOYEE_ID  CARD_NUM  LOCATION_ID  ACCESS_DATE
1            F123456   10           07302021 09:47:48
1            F123456   5            07282021 19:17:42
2            R33432    4            07282021 02:00:37
3            C765341   2            07282021 17:33:57
3            C765341   6            07282021 17:33:57
3            C765341   1            07282021 18:53:07
4            D564311   6            07282021 03:06:37

ALTER SESSION SET NLS_DATE_FORMAT = 'MMDDYYYY HH24:MI:SS';

CREATE OR REPLACE TYPE nt_date IS TABLE OF DATE;
/

CREATE OR REPLACE FUNCTION generate_dates_pipelined(
  p_from IN DATE,
  p_to   IN DATE
)
RETURN nt_date PIPELINED DETERMINISTIC
IS
  v_start DATE := TRUNC(LEAST(p_from, p_to));
  v_end   DATE := TRUNC(GREATEST(p_from, p_to));
BEGIN
  LOOP
    PIPE ROW (v_start);
    EXIT WHEN v_start >= v_end;
    v_start := v_start + INTERVAL '1' DAY;
  END LOOP;
  RETURN;
END generate_dates_pipelined;
/

CREATE TABLE employees(
  employee_id NUMBER(6),
  first_name  VARCHAR2(20),
  last_name   VARCHAR2(20),
  card_num    VARCHAR2(10),
  work_days   VARCHAR2(7)
);

ALTER TABLE employees ADD (
  CONSTRAINT employees_pk PRIMARY KEY (employee_id)
);

INSERT INTO employees (employee_id, first_name, last_name, card_num, work_days)
WITH names AS (
  SELECT 1, 'Jane',    'Doe',   'F123456', 'NYYYYYN' FROM dual UNION ALL
  SELECT 2, 'Madison', 'Smith', 'R33432',  'NYYYYYN' FROM dual UNION ALL
  SELECT 3, 'Justin',  'Case',  'C765341', 'NYYYYYN' FROM dual UNION ALL
  SELECT 4, 'Mike',    'Jones', 'D564311', 'NYYYYYN' FROM dual
)
SELECT * FROM names;

CREATE TABLE locations AS
SELECT
  level AS location_id,
  'Door ' || level AS location_name,
  CASE round(dbms_random.value(1,3))
    WHEN 1 THEN 'A'
    WHEN 2 THEN 'T'
    WHEN 3 THEN 'G'
  END AS location_type
FROM dual
CONNECT BY level <= 10;

ALTER TABLE locations ADD (
  CONSTRAINT locations_pk PRIMARY KEY (location_id));

SELECT
  e.employee_id,
  e.card_num,
  l.location_id,
  c.access_date,
  e.rn, l.rn, c.rn
FROM
  (
    SELECT employee_id,
           round(dbms_random.value(1, 10)) rn,
           card_num
    FROM employees
  ) e
  INNER JOIN
  (
    SELECT location_id,
           row_number() OVER (ORDER BY dbms_random.value) AS rn
    FROM locations
  ) l ON (e.rn = l.rn)
  INNER JOIN
  (
    SELECT COLUMN_VALUE AS access_date,
           row_number() OVER (ORDER BY dbms_random.value) AS rn
    FROM TABLE(generate_dates_pipelined(SYSDATE, ADD_MONTHS(SYSDATE, 1)))
  ) c ON (e.rn >= c.rn)
ORDER BY employee_id, location_id;
</code>
Categories: DBA Blogs

Joining Data in OAC

Rittman Mead Consulting - Thu, 2021-07-29 08:41

One of the new features in OAC 6.0 was Multi Table Datasets, which provides another way to join tables to create a Data Set.

We can already define joins in the RPD, use joins in OAC’s Data Flows and join Data Sets using blending in DV Projects, so I went on a little quest to compare the pros and cons of each of the methods to see if I can conclude which one works best.

What is a data join?

Data in databases is generally spread across multiple tables and it is difficult to understand what the data means without putting it together. Using data joins we can stitch the data together, making it easier to find relationships and extract the information we need. To join two tables, at least one column in each table must be the same. There are four available types of joins I’ll evaluate:

1.    Inner join - returns records that have matching values in both tables. All the other records are excluded.

2.    Left (outer) join - returns all records from the left table with the matched records from the right table.

3.    Right (outer) join - returns all records from the right table with the matched records from the left table.

4.    Full (outer) join - returns all records from both tables, combining matched records and keeping unmatched records from either side.

Each of these approaches gives the developer different ways and places to define the relationship (join) between the tables. Underpinning all of them is SQL: ultimately, OAC will generate a SQL query that will retrieve data from the database, so to understand joins, let's start by looking at SQL Joins.

SQL Joins

In an SQL query, a JOIN clause is used to combine rows from the tables. Here is an example:

SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
INNER JOIN DEPT ON DEPT.id = EMP.id;  
Figure 1 - Example of an inner join on a sample dataset.
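For comparison, the other join types listed above look like this in SQL; a minimal sketch reusing the same sample EMP and DEPT tables:

-- Left outer join: all employees, matched departments where available
SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
LEFT JOIN DEPT ON DEPT.id = EMP.id;

-- Right outer join: all departments, matched employees where available
SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
RIGHT JOIN DEPT ON DEPT.id = EMP.id;

-- Full outer join: all rows from both tables, matched where possible
SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
FULL OUTER JOIN DEPT ON DEPT.id = EMP.id;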

Now that we understand the basic concepts, let’s look at the options available in OAC.

Option 1: RPD Joins

The RPD is where the BI Server stores its metadata. Defining joins in the RPD is done in the Admin Client Tool and is the most rigorous of the join options. Joins are defined during the development of the RPD and, as such, are subject to the software development lifecycle and are typically thoroughly tested.

End users access the data through Subject Areas, either using classic Answers and Dashboards, or DV. This means the join is automatically applied as fields are selected, giving you more control over your data and, since the RPD is not visible to end users, preventing them from making any incorrect joins.

The main downside of defining joins in the RPD is that it’s a slower process - if your boss expects you to draw up some analyses by the end of the day, you may not make the deadline using the RPD. It takes time to organise data, make changes, then test and release the RPD.

Join Details

The Admin Client Tool allows you to define logical and physical tables, aggregate table navigation, and physical-to-logical mappings. In the physical layer you define primary and foreign keys using either the properties editor or the Physical Diagram window. Once the columns have been mapped to the logical tables, logical keys and joins need to be defined. Logical keys are generally created automatically when mapping physical key columns. Logical joins do not specify join columns; these are derived from the physical mappings.

You can change the properties of the logical join; in the Business Model Diagram you can set a driving table (which optimises how the BI Server processes joins when one table is smaller than the other), the cardinality (which expresses how rows in one table are related to rows in the table to which it is joined), and the type of join.

Driving tables only activate query optimisation within the BI Server when one of the tables is much smaller than the other. When you specify a driving table, the BI Server only uses it if the query plan determines that its use will optimise query processing. In general, driving tables can be used with inner joins, and for outer joins when the driving table is the left table for a left outer join, or the right table for a right outer join. Driving tables are not used for full outer joins.

The Physical Diagram join also gives you an expression editor to manually write the SQL for the join you want to perform on the desired columns, adding the flexibility to customise the nature of the join. You can define complex joins, i.e. those over non-foreign-key and primary-key columns, using the expression editor rather than key column relationships. Complex joins are not as efficient, however, because they don't use key column relationships.
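As a hypothetical illustration of such a complex join (the ORDERS and REGIONS tables and their columns below are invented for this sketch), the expression editor lets you join on expressions rather than a simple key equality:

-- Complex join over non-key columns, written as it would appear in SQL
SELECT o.order_id, r.region_name
FROM   ORDERS o
JOIN   REGIONS r
  ON   UPPER(SUBSTR(o.region_code, 1, 3)) = r.region_prefix
 AND   o.order_date BETWEEN r.valid_from AND r.valid_to;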

Figure 3 - Business Model Diagram depicting joins made between three tables in the RPD.

It’s worth addressing a separate type of table available for creation in the RPD – lookup tables. Lookup tables can be added to reference both physical and logical tables, and there are several use cases for them e.g., pushing currency conversions to separate calculations. The RPD also allows you to define a logical table as being a lookup table in the common use case of making ID to description mappings.

Lookup tables can be sparse and/or dense in nature. A dense lookup table contains translations in all languages for every record in the base table. A sparse lookup table contains translations for only some records in the base table. They can be accessed via a logical calculation using DENSE or SPARSE lookup function calls. Lookup tables are handy as they allow you to model the lookup data within the business model; they're typically used for lookups held in different databases to the main data set.

Multi-database joins allow you to join tables from across different databases. Even though the BI Server can optimise the performance of multi-database joins, these joins are significantly slower than those within a single database.

Option 2: Data Flow Joins

Data Flows provide a way to transform data in DV. The data flow editor gives us a canvas where we can add steps to transform columns, merge, join or filter data, plot forecasts or apply various models on our datasets.

When it comes to joining datasets, you start by adding two or more datasets. If they have one or more matching columns, DV automatically detects this and joins them; otherwise, you have to manually add a 'Join' step and, provided the columns' data types match, a join is created.

A join in a data flow is only possible between two datasets, so if you want to join a third dataset you have to create a join between the output of the first two and the third, and so on. You can give your join a name and description, which helps keep track when more than two datasets are involved. You can then view and edit the join properties via the nodes created against each dataset. DV gives you the standard four types of joins (Fig. 1), but they are worded differently; you achieve the four possible combinations by toggling each input node between 'All rows' and 'Matching rows'. That means:

Join type     Node 1          Node 2
Inner join    Matching rows   Matching rows
Left join     All rows        Matching rows
Right join    Matching rows   All rows
Full join     All rows        All rows

The table above explains which type of join can be achieved by toggling between the two drop-down options for each dataset in a data flow join.

Figure 4 - Data flow editor where joins can be made and edited between two datasets.

It’s worth mentioning there is also an operator called ‘Union Rows’. You can concatenate two datasets, provided they have the same number of columns with compatible datatypes. There are a number of options to decide how you want to combine the rows of the datasets.

Figure 5 - Union Rows operator displaying how the choices made impact columns in the data.
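In SQL terms, a 'Union Rows' step corresponds roughly to a UNION or UNION ALL; a minimal sketch, assuming two hypothetical staging tables with the same column count and compatible datatypes:

-- UNION ALL keeps duplicate rows; UNION removes them
SELECT customer_id, city, age FROM STG_CUSTOMERS_2020
UNION ALL
SELECT customer_id, city, age FROM STG_CUSTOMERS_2021;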

One advantage of data flows is that they allow you to materialise the data, i.e. save it to disk or to a database table. If your join query takes 30 minutes to run, you can schedule it to run overnight and then reports can query the resultant dataset.
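Conceptually, materialising the output of a join in a data flow amounts to something like a create-table-as-select run on a schedule; a hedged sketch with hypothetical table and column names:

-- Persist the joined result so reports can query the materialised table
CREATE TABLE DW_CUSTOMER_CITY AS
SELECT cst.customer_id,
       cst.last_name,
       cty.city,
       cty.population
FROM   SRC_CUSTOMER cst
JOIN   SRC_CITY cty ON cty.city_id = cst.city_id;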

However, there are limited options as to the complexity of joins you can create:

  1. the absence of an expression editor to define complex joins
  2. you cannot join more than two datasets at a time.

You can schedule data flows, which allows you to materialise the data overnight, ready for when users want to query it at the start of the day.

Data Flows can be developed and executed on the fly, unlike the longer development lifecycle of the RPD.

It should be noted that Data Flows cannot be shared. The only way around this is to export the Data Flow and have the other user import and execute it. The other user will need to be able to access the underlying Data Sets.

Option 3: Data Blending

Before looking at the new OAC feature, it's worth noting there is already a method for cross-database joins: blending data.

Given at least two data sources, for example a database connection and an Excel spreadsheet from your computer, you can create a Project with one dataset and add the other Data Set under the Visualise tab. The system tries to find matches for the data that's added based on common column names and compatible data types. Upon navigating back to the Data tab, you can also manually blend the datasets by selecting matching columns from each dataset. However, there is no ability to edit any other join properties.

Figure 6 - Existing method of blending data by matching columns from each dataset.

Option 4: Multi Table Datasets

Lastly, let’s look at the newly added feature of OAC 6.0: Multi Table Datasets. Oracle have now made it possible to join several tables to create a new Data Set in DV.

Historically you could create Data Sets from a database connection or upload files from your computer. You can now create a new Data Set and add multiple tables from the same database connection. Oracle has published a list of compatible data sources.

Figure 7 - Data Set editor where joins can be made between numerous datasets and their properties edited.

Once you add your tables DV will automatically populate joins, if possible, on common column names and compatible data types.

The process works similarly to how joins are defined in Data Flows; a pop-up window displays editable properties of the join with the same complexity - the options to change type, columns to match, the operator type relating them, and add join conditions.

Figure 8 - Edit window for editing join properties.

The data preview refreshes upon changes made in the joins, making it easy to see the impact of joins as they are made.

Unlike in the RPD, you do not have to create aliases of tables in order to use them for multiple purposes; you can directly import tables from a database connection, create joins and save this Multi Table Dataset separately to then use it further in a project, for example. So, the original tables you imported will retain their original properties.

If you need to write complex queries you can use the Manual SQL Query editor to create a Data Set, but you can only use the ‘JOIN’ clause.
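For instance, a manual query entered in that editor might look like the following; a minimal sketch reusing the ODI_DEMO sample tables from the Sql2Odi article above, using only the JOIN clause:

SELECT cst.last_name,
       cst.first_name,
       cty.city,
       cty.population
FROM   ODI_DEMO.SRC_CUSTOMER cst
JOIN   ODI_DEMO.SRC_CITY cty ON cty.city_id = cst.city_id
WHERE  cst.age BETWEEN 25 AND 45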

Figure 9 - Manual Query editor option under a set-up database connection.

So, what's the verdict?

Well, after experimenting with each type of joining method and talking to colleagues with experience, the verdict is: it depends on the use case.

There really is no right or wrong method of joining datasets and each approach has its pros and cons, but I think what matters is evaluating which one would be more advantageous for the particular use case at hand.

Using the RPD is a safer and more robust option: you have control over the data from start to end, so you can reduce the chance of incorrect joins. However, it is considerably slower and may not be feasible if users demand quick results. In this case, using one of the options in DV may be more beneficial.

You could use Data Flows, either scheduled or run manually, or Multi Table Datasets. Both approaches offer less scope for complex joins than the RPD. You can only join two Data Sets at a time in a traditional data flow, and you need a workaround in DV to join data across database connections and computer-uploaded files; so if time and efficiency are of the essence, these can be a disadvantage.

l would say it’s about striking a balance between turnaround time and quality - of course both good data analysis in good time is desirable, but when it comes to joining datasets in these platforms it will be worth evaluating how the use case will benefit from either end of the spectrum.

Categories: BI & Warehousing

Microsoft Data Analyst Associate [DA-100] Training | Day 4 Q/A Review

Online Apps DBA - Thu, 2021-07-29 03:19

This blog post will give a quick insight into our Microsoft Data Analyst Associate [DA-100] Class (Day 4). The DA-100 Certification enables Data Analysts to attain good quality in their data assets with the assistance of Microsoft Power BI. On our Day 4 Live Session, we covered Module 5: Create Model Calculations using […]

The post Microsoft Data Analyst Associate [DA-100] Training | Day 4 Q/A Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

AWS Certified DevOps Professional | Day 2 Q/A Review

Online Apps DBA - Thu, 2021-07-29 02:11

The blog post – https://k21academy.com/awsdevopsday2 will cover the Q/A’s from Day 2 of AWS DevOps Professional in which we have covered Module 2: Introduction To DevOps. This blog will help you to get started with AWS Certified DevOps Engineer Professional [DOP-C01]. This blog post will help you to increase your knowledge about AWS Certified DevOps […]

The post AWS Certified DevOps Professional | Day 2 Q/A Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Microsoft Data Analyst Associate [DA-100] Training | Day 3 Q/A Review

Online Apps DBA - Thu, 2021-07-29 01:56

This blog post will give a quick insight into our Microsoft Data Analyst Associate [DA-100] Class (Day 3). The DA-100 Certification enables Data Analysts to attain good quality in their data assets with the assistance of Microsoft Power BI. On our Day 3 Live Session, we covered working with tables, creating date tables, […]

The post Microsoft Data Analyst Associate [DA-100] Training | Day 3 Q/A Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Day 3 [Google Professional Cloud Architect] Q/A: Virtual Private Cloud and Introduction to Cloud IAM

Online Apps DBA - Thu, 2021-07-29 01:37

Google Cloud Platform is one of the fast-growing cloud service providers in the cloud computing market and Google Professional Cloud Architect is one of the most popular GCP certifications. A Google Cloud Architect is responsible for designing, implementing, and managing cloud solutions on GCP using various services like Compute, Networking, Databases, Storage, Big Data, Management, […]

The post Day 3 [Google Professional Cloud Architect] Q/A: Virtual Private Cloud and Introduction to Cloud IAM appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Stats_mode function - Deterministic or Non-Deterministic

Tom Kyte - Thu, 2021-07-29 01:26
Stats_mode function - Deterministic or Non-Deterministic? What does Oracle return when there are multiple keys with the same (highest) mode, and how?
Categories: DBA Blogs

Microsoft Data Analyst Associate [DA-100] Training | Day 2 Q/A Review

Online Apps DBA - Thu, 2021-07-29 00:38

This blog post will give a quick insight into our Microsoft Data Analyst Associate [DA-100] Class (Day 2). The DA-100 Certification enables Data Analysts to attain good quality in their data assets with the assistance of Microsoft Power BI. On our Day 2 Live Session, we covered Module 3: Clean, Transform and Load […]

The post Microsoft Data Analyst Associate [DA-100] Training | Day 2 Q/A Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Azure Data Engineer [DP203] Q/A | Day 6 Live Session Review

Online Apps DBA - Thu, 2021-07-29 00:11

The blog post – https://k21academy.com/dp203day6 will cover the Q/A’s from Day 6 of Microsoft Azure Data Engineering [Dp-203] Certification in which we have covered Module 9: Orchestrate data movement and transformation in Azure Synapse Pipelines & Module 10: Optimize query performance with dedicated SQL pools in Azure Synapse FAQs. This blog will help you to […]

The post Azure Data Engineer [DP203] Q/A | Day 6 Live Session Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

using explain plan

Tom Kyte - Wed, 2021-07-28 07:06
How should I use EXPLAIN PLAN for tuning my SQL statements? Kindly advise me on the sequence of steps to be followed. Some live examples would also be very helpful. With Regards, Ramesh.S
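As a minimal sketch of the basic mechanics only (not the full tuning methodology the question asks about), EXPLAIN PLAN is typically paired with DBMS_XPLAN.DISPLAY to show the optimiser's plan; the query below assumes the standard HR sample schema:

-- Generate the plan for a statement, then display it
EXPLAIN PLAN FOR
SELECT e.employee_id, e.last_name
FROM   hr.employees e
WHERE  e.department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);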
Categories: DBA Blogs

Oracle TDE - AES encryption mode of operation

Tom Kyte - Wed, 2021-07-28 07:06
Product: Oracle Database 19c Transparent Data Encryption (TDE)

From Chapter 10 of the Advanced Security Guide, we know that for the supported block ciphers "table keys are used in cipher block chaining (CBC) operating mode, and the tablespace keys are used in cipher feedback (CFB) operating mode." https://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/frequently-asked-questions-about-transparent-data-encryption.html

Question 1: Both modes of operation require an Initialization Vector to be specified, yet TDE does not allow the DBA to specify an IV. What IV does TDE actually use? Is it pseudorandom or a fixed value such as all zeros?

Question 2: If the IV is fixed, it would leak information; e.g. for CBC mode it makes the encryption deterministic, so the same plaintext always maps to the same ciphertext. So, is it possible to enhance TDE to allow an IV to be specified in the same way that DBMS_CRYPTO currently does? Thanks
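For reference, and purely as a hedged sketch unrelated to TDE internals, DBMS_CRYPTO does accept an explicit IV as the fourth argument to ENCRYPT; the key, IV and plaintext below are generated or made up for illustration:

DECLARE
  l_key RAW(32)   := DBMS_CRYPTO.RANDOMBYTES(32);   -- 256-bit key, illustrative only
  l_iv  RAW(16)   := DBMS_CRYPTO.RANDOMBYTES(16);   -- explicit 128-bit IV
  l_src RAW(2000) := UTL_RAW.CAST_TO_RAW('some plaintext');
  l_enc RAW(2000);
BEGIN
  l_enc := DBMS_CRYPTO.ENCRYPT(
             src => l_src,
             typ => DBMS_CRYPTO.ENCRYPT_AES256
                    + DBMS_CRYPTO.CHAIN_CBC
                    + DBMS_CRYPTO.PAD_PKCS5,
             key => l_key,
             iv  => l_iv);
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_enc));
END;
/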
Categories: DBA Blogs

AWS Certified DevOps Professional | Day 3 Q/A Review

Online Apps DBA - Tue, 2021-07-27 00:49

The blog post – https://k21academy.com/awsdevopsday3 will cover the Q/A’s from Day 3 of AWS DevOps Professional in which we have covered Module 3: SDLC Automation. This blog will help you to get started with AWS Certified DevOps Engineer Professional [DOP-C01]. The blog post will help you to increase your knowledge about AWS Certified DevOps Engineer […]

The post AWS Certified DevOps Professional | Day 3 Q/A Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

how to transfer client files to DB server using PL/SQL

Tom Kyte - Mon, 2021-07-26 18:26
Hi Tom, is there a way to transfer a file from a client machine to the DB server using PL/SQL? If this is possible, can you show me an example? Thank you very much. Amy
Categories: DBA Blogs

Google Cloud Pricing Calculation

Online Apps DBA - Mon, 2021-07-26 06:08

➤ Google Cloud stands out among cloud providers because of its cutting-edge tools and services. In 2021, Gartner named Google Cloud the top in the IaaS Magic Quadrant. ➤ Google Cloud is a collection of Google’s cloud computing services. The platform offers a variety of services, including computing, storage, networking, Big Data, and others, all […]

The post Google Cloud Pricing Calculation appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
