
Hello, I haven’t contributed to my site lately because I have been working on Azure projects, including Azure Databricks.

Anyway, I have been comparing the size of an on-premises Oracle table with its copy on Azure Blob Storage.

I ingested an Oracle (19c) table into Azure Blob Storage as a Parquet file using Azure Data Factory (ADF). The resulting file was over 80% smaller than the source table, and I didn’t filter any rows.

I have not seen any articles confirming that an Oracle table will typically compress by more than 80% this way.

What have you seen so far? Please add your feedback.

Thanks

 

Here I explain how to enable and configure email notifications for Oracle Scheduler (DBMS_SCHEDULER) jobs.

First –

The Scheduler must be configured with an email (SMTP) server before it can send notifications.

What is required: your company’s SMTP server address.

Once you have this information, executing the following PL/SQL block enables the Scheduler to send emails. (Note that this only enables the capability; it doesn’t send any emails yet.)

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
    attribute => 'email_server',
    value     => 'smtp.company.com');
END;
/

Prerequisite: to execute the block above, the DB account needs the MANAGE SCHEDULER privilege:

GRANT MANAGE SCHEDULER TO DB_ACCT;
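Optionally, you can verify the setting afterwards by querying the Scheduler’s global attributes (a quick check; the DBA view requires appropriate privileges):

SELECT value
FROM   dba_scheduler_global_attribute
WHERE  attribute_name = 'EMAIL_SERVER';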

 

Second –

To send an email when a particular job succeeds or fails, the notification must be enabled for that specific job using the following code.

BEGIN
  DBMS_SCHEDULER.add_job_email_notification(
    job_name   => '<JOB_NAME>',
    recipients => 'testing@company.com',
    events     => 'JOB_SUCCEEDED,JOB_FAILED,JOB_BROKEN,JOB_SCH_LIM_REACHED,JOB_CHAIN_STALLED,JOB_OVER_MAX_DUR');
END;
/

job_name: replace <JOB_NAME> with the job you want notifications for.

recipients: replace with the email address(es) the notification should be sent to.

events: list only the events you want to be notified about, e.g. just JOB_SUCCEEDED, or only the failure events.
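To confirm the notification was registered, you can query USER_SCHEDULER_NOTIFICATIONS; each job/recipient/event combination appears as one row:

SELECT job_name, recipient, event
FROM   user_scheduler_notifications;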

 

Sample Email:

Here is a sample of the email you will receive.

From: MAILER-DAEMON
Sent: Tuesday, February 07, 2017 11:49 AM
To: <you>
Subject: Oracle Scheduler Job Notification – <JOB_NAME> JOB_SUCCEEDED

Message: 

Job: <JOB_NAME>

Event: JOB_SUCCEEDED

Date: 07-FEB-17 11.00.13.163348 AM -05:00
Log id: 6312345
Job class: DEFAULT_JOB_CLASS
Run count: 4
Failure count: 2
Retry count: 0
Error code: 0
Error message:


In a data warehouse project with a star schema, it is recommended to run queries with STAR TRANSFORMATION enabled for better performance. I had exactly this scenario in my project, where a FACT table is joined with a few DIM tables.

(DB version: 11g – 11.2.0.4)

Our FACT table –

Partitioned by DATE, then subpartitioned by GEO; the table has 7+ billion records.

DIM tables –

Some of these were hash partitioned.

Many articles suggest that the following are prerequisites for the STAR transformation to work:

  1. A bitmap index should be built on each of the foreign key columns of the fact table or tables.
  2. The initialization parameter STAR_TRANSFORMATION_ENABLED should be set to TRUE. This enables an important optimizer feature for star-queries. It is set to FALSE by default for backward-compatibility.

Regarding the foreign keys: I see no improvement from having foreign keys on the fact table. I do, however, have bitmap indexes on all fact-table columns that reference the DIM tables in my query.

Initially I had no foreign keys and generated a plan. Then I added the foreign keys, analyzed the tables, and generated the plan again; the two plans are exactly the same.

These queries run for more than 15 minutes and time out in our OBIEE reports, so I gained no advantage from the foreign keys that many articles suggested adding.

However, I noticed in my plan that the FACT table and one of the DIM tables were joined with NESTED LOOPS. I added a USE_HASH hint to join them with a HASH join and disabled STAR TRANSFORMATION (you can use the hint NO_STAR_TRANSFORMATION); my query then returned the same results within a few seconds. A sketch of such a hinted query follows.
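For illustration only, with hypothetical FACT_SALES/DIM_GEO names standing in for my real tables:

SELECT /*+ NO_STAR_TRANSFORMATION USE_HASH(d) */
       d.geo_name, SUM(f.amount)
FROM   fact_sales f
JOIN   dim_geo d ON d.geo_id = f.geo_id
GROUP  BY d.geo_name;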

The conclusion: don’t just rely on the STAR TRANSFORMATION. Look at the plan, disable it, and test, to make sure you are returning data as fast as you can.

If you have encountered similar issues, please comment on whether you gained or regressed with this option.

Unique index Facts

Unusable unique index:

If a table has a unique index in UNUSABLE state, you can’t insert data into the table. The insert raises the following error:

ORA-01502: index 'xxxx.xxxx' or partition of such index is in unusable state

The only solution is to either drop the index or rebuild it (a rebuild sketch follows the example below).

eg.

SQL> drop table test_tab;
Table dropped.
Elapsed: 00:00:00.14
SQL> create table test_tab as select * from user_objects where rownum < 1;
Table created.
Elapsed: 00:00:01.69
SQL> select * from test_tab;
no rows selected.
Elapsed: 00:00:00.02
SQL> create unique index test_tab_ui on test_tab(object_name);
Index created.
Elapsed: 00:00:00.03
SQL> alter index test_tab_ui unusable;
Index altered.
Elapsed: 00:00:00.01
SQL> select index_name, status from user_indexes where index_name like 'TEST_TAB%';

INDEX_NAME                     STATUS
------------------------------ --------
TEST_TAB_UI                    UNUSABLE

1 row selected.
Elapsed: 00:00:00.15
SQL> insert into test_tab
select * from user_objects where rownum < 11;
insert into test_tab
select * from user_objects where rownum < 11
Error at line 18
ORA-01502: index '<schema>.TEST_TAB_UI' or partition of such index is in unusable state
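Continuing the example, a rebuild makes the index usable again without dropping and re-creating it:

alter index test_tab_ui rebuild;
-- STATUS in user_indexes returns to VALID, and inserts succeed again.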

 

Local Unique index:

It is not possible to create a local unique index on a column that is not part of the partitioning key. When you try, Oracle raises the following error:

ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index

eg:

SQL> drop table test_tab;
Table dropped.
Elapsed: 00:00:00.15

-- Creating a table with the structure of user_objects.
-- Notice that it is partitioned by column "object_id".
-- The partitioning method (hash here) does not matter.
SQL> CREATE TABLE TEST_TAB
(
OBJECT_NAME VARCHAR2(128 BYTE),
SUBOBJECT_NAME VARCHAR2(30 BYTE),
OBJECT_ID NUMBER,
DATA_OBJECT_ID NUMBER,
OBJECT_TYPE VARCHAR2(19 BYTE),
CREATED DATE,
LAST_DDL_TIME DATE,
TIMESTAMP VARCHAR2(19 BYTE),
STATUS VARCHAR2(7 BYTE),
TEMPORARY VARCHAR2(1 BYTE),
GENERATED VARCHAR2(1 BYTE),
SECONDARY VARCHAR2(1 BYTE),
NAMESPACE NUMBER,
EDITION_NAME VARCHAR2(30 BYTE)
)
partition by hash(object_id) PARTITIONS 32;
Table created.
Elapsed: 00:00:00.09


-- Creating a unique local index on "data_object_id", which is not part of the partitioning key.

SQL> CREATE UNIQUE INDEX TEST_TAB_UI ON TEST_TAB
(DATA_OBJECT_ID)
LOCAL;
CREATE UNIQUE INDEX TEST_TAB_UI ON TEST_TAB
(DATA_OBJECT_ID)
LOCAL
Error at line 30
ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index

 

However, the same index can be created as a non-unique local index:

SQL> CREATE INDEX TEST_TAB_UI ON TEST_TAB
(DATA_OBJECT_ID)
LOCAL;

Index created.
Elapsed: 00:00:01.07
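For completeness, a unique local index does work when the partitioning column is included in the index key, which satisfies the ORA-14039 rule. A sketch continuing the example (the index name is hypothetical):

CREATE UNIQUE INDEX TEST_TAB_UI2 ON TEST_TAB
(DATA_OBJECT_ID, OBJECT_ID)  -- OBJECT_ID is the partitioning column
LOCAL;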

 

Tested DB: 11g

2014 in review

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 11,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 4 sold-out performances for that many people to see it.

Click here to see the complete report.

To refresh materialized views, you can use either of two APIs: DBMS_MVIEW and DBMS_SNAPSHOT.

There is no functional difference between them; DBMS_SNAPSHOT is simply the legacy name from when materialized views were called snapshots, and from 8i onward the documented package is DBMS_MVIEW.

DBMS_REFRESH, however, is used for refresh groups and is different from the above two. It lets you create groups of materialized views that are refreshed together to a transactionally consistent point in time.
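A minimal sketch of both approaches, with hypothetical MV and group names:

-- Single MV: complete refresh ('C'); 'F' would request a fast refresh.
BEGIN
  DBMS_MVIEW.REFRESH('SALES_MV', method => 'C');
END;
/

-- Refresh group: all members refresh to a transactionally consistent point.
BEGIN
  DBMS_REFRESH.MAKE(name      => 'SALES_GRP',
                    list      => 'SALES_MV, CUST_MV',
                    next_date => SYSDATE,
                    interval  => 'SYSDATE + 1');
  DBMS_REFRESH.REFRESH('SALES_GRP');
END;
/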

To Shrink a Table -
1) Make sure no ROWID-based materialized views exist on the table; if they do, drop them.
 As shrinking may change rowids, ROWID-based MVs must be dropped (re-created in step 7).
2) Drop any function-based indexes; shrink is not supported while they exist (re-created in step 6).
3) Enable table row movement, if it is disabled:
 alter table <table_name> enable row movement;
4) Execute the shrink command:
 alter table <table_name> shrink space;
5) If table row movement was previously disabled, put it back:
 alter table <table_name> disable row movement;
6) Re-create the function-based indexes dropped in step 2.
7) Re-create the ROWID-based materialized views dropped in step 1.
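For example, on a hypothetical table SALES_HIST with no ROWID-based MVs or function-based indexes, the core of the operation (steps 3-5) is:

alter table sales_hist enable row movement;
alter table sales_hist shrink space cascade;  -- CASCADE also shrinks dependent indexes
alter table sales_hist disable row movement;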

If you are using an Oracle 11g database with SQL Plan Management (SPM) baselines, you can pin an optimal SQL plan even when you can’t change the application code.

Your database should have the parameter optimizer_use_sql_plan_baselines = TRUE (which is the default).

  • Open a SQL*Plus session.
  • If the parameter optimizer_capture_sql_plan_baselines is set to FALSE (which is the default), execute the following to capture the plans generated by your SQL statement:
alter session set optimizer_capture_sql_plan_baselines = TRUE;
  • Execute the SQL statement (with the hint) so SPM generates and captures the plan.
  • The session can now be disconnected, as the new plan is generated and stored in the baselines.
  • Obtain the SQL_HANDLE and PLAN_NAME using the SQL statement below:
select sql_handle, plan_name, sql_text, enabled, accepted, fixed
from dba_sql_plan_baselines
where upper(sql_text) like '%<portion of your sql to identify>%';
  • Mark the appropriate plan as FIXED using the following command:
SET SERVEROUTPUT ON
DECLARE
  l_plans_altered PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => '<sql_handle obtained from the query above>',
    plan_name       => '<plan name of the plan you want to use>',
    attribute_name  => 'fixed',
    attribute_value => 'YES');
  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/

Now that the plan is marked as FIXED, it will be used for future executions of this SQL statement.

Reference: Arup Nanda Article: http://www.oracle.com/technetwork/issue-archive/2009/09-mar/o29spm-092092.html

Hi,
We have all heard about SPM and plan baselines and how they help improve database performance. Here is a slightly tricky question; I don’t think Oracle has documented anything related to it.

We have an Oracle 11.2.0.3 instance where we used to have a schema (say DEVX) that our application ran against. At some point we stopped using that schema and dropped it, and instead pointed the same application at another schema (say DEV).

Looking at the SQL plan baselines, I see plans created by both the old schema (DEVX) and the current schema (DEV). The plans created by the old schema have a blank LAST_VERIFIED column, yet they are ENABLED and some even show ACCEPTED = YES.

Given that LAST_VERIFIED is blank, are these plans still being used by the optimizer, and is it safe to delete the plans created by the old schema (DEVX)?
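For reference, here is how the old schema’s plans can be listed, and how they could be dropped once confirmed unused (a sketch; run the drop only after you are sure):

select sql_handle, plan_name, creator, parsing_schema_name,
       last_verified, enabled, accepted
from dba_sql_plan_baselines
where creator = 'DEVX';

DECLARE
  l_dropped PLS_INTEGER;
BEGIN
  -- Drops every plan under the handle; pass plan_name to drop just one.
  l_dropped := DBMS_SPM.drop_sql_plan_baseline(
                 sql_handle => '<sql_handle from the query above>');
  DBMS_OUTPUT.put_line('Plans dropped: ' || l_dropped);
END;
/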

My colleague was trying to rebuild indexes and wrote a script assuming that all indexes can be rebuilt online. He ended up with errors, because his database contains all sorts of indexes: normal, function-based, domain, LOB, and IOT. (A query to classify them is sketched at the end of this post.)

Here is the list of index types and which can be performed online.

Normal and Function Based index

These can be rebuilt online without any problem:

alter index <index_name> rebuild online;

Domain Index

These can be rebuilt online; however, you can’t specify NOLOGGING.

LOB indexes

These need to be moved instead:

alter table <your_table> move lob (<lob_column>) store as (tablespace <tablespace>);

Index-Organized Tables (IOT)

These can’t be rebuilt online; instead, the table needs to be moved online.

Per the documentation: because index-organized tables are stored primarily in a B-tree index, they can encounter fragmentation as a consequence of incremental updates, and the MOVE statement reduces this fragmentation.

ALTER TABLE <IOT_name> MOVE ONLINE;

Other Restrictions Using ONLINE clause:

  • Parallel DML is not supported during online index building. If you specify ONLINE and subsequently issue parallel DML statements, then Oracle Database returns an error.
  • You cannot specify ONLINE for a bitmap join index or a cluster index.
  • For a nonunique secondary index on an index-organized table, the number of index key columns plus the number of primary key columns that are included in the logical rowid in the index-organized table cannot exceed 32. The logical rowid excludes columns that are part of the index key.
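Before scripting rebuilds, it helps to classify the indexes first. A minimal sketch against the current schema:

select index_name, index_type, status
from user_indexes
order by index_type;

-- INDEX_TYPE values include NORMAL, FUNCTION-BASED NORMAL, DOMAIN,
-- LOB, and IOT - TOP, which map to the cases above.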