A few basic notes on Oracle Database Administration.
Saturday, January 26, 2008
Getting Biblical
Apparently, the day before Jan 1, 0 AD is Dec 31, 1 BC.
I didn't expect it. By the way, biblical historians say that the BC/AD boundary falls neither at the birth nor the death of Jesus, but somewhere around his fourth year.
Anyway, here is what the Oracle of Redwood City had to say on the topic.
SQL> alter session set nls_date_format='FMDay ddth Month YYYY B.C.';
SQL> select trunc(date '-0-1-1','yyyy') from dual;
TRUNC(DATE'-0-1-1','YYYY')
----------------------------------
Thursday 1st January 0 A.D.
SQL> select trunc(date '-0-1-1','yyyy')-1 from dual;
TRUNC(DATE'-0-1-1','YYYY')-1
----------------------------------
Wednesday 31st December 1 B.C.
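Oddly, the front door is locked even though the literal above sneaks in: ask TO_DATE for year 0 and Oracle refuses. (This transcript is reconstructed from memory, so treat the exact layout as approximate.)
SQL> select to_date('0000-01-01', 'YYYY-MM-DD') from dual;
ERROR at line 1:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0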
Tuesday, January 22, 2008
Real Application Testing - RATs!!
In Real Application Testing you capture the actual production workload for replay and comparison on a test system. Nice idea, but the proposed test system is the same size as production, with just as much machinery and disk space behind it. I don't know about you, but I have never seen a test system as big as the production system.
The lecturer suggested that
- You could discount any disparities attributable to the platform difference.
- You could run the workload on the test system against the old application, then run it again on the test system against the new application (or database version), making an apples-to-apples comparison without touching production.
There was a lot of emphasis on how much faster a production workload is to prepare than a test workload built by testers, complete with charts to prove the point. But a production workload is going to be full of social security numbers, credit card numbers, etc. The presenter suggested that you could mask sensitive data, but there was no component for this item in the timeline that compared traditional testing with Real Application Testing. I have just read in Lutz Hartmann's blog that Oracle announced in November that the ability to mask 'was coming'. I don't know if it is here yet.
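For reference, the capture and replay plumbing lives in the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages. A minimal sketch; the capture name, directory, and one-hour duration are my own placeholders, and the wrc replay clients still have to be started on the test box:
-- On production: capture an hour of the live workload
SQL> execute DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'jan_peak', dir => 'RAT_DIR', duration => 3600);
SQL> execute DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
-- On the test system: preprocess the capture files, then replay them
SQL> execute DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'RAT_DIR');
SQL> execute DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'jan_peak_replay', replay_dir => 'RAT_DIR');
SQL> execute DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
SQL> execute DBMS_WORKLOAD_REPLAY.START_REPLAY;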
Overall, Real Application Testing seemed like a good idea that will be much better by Oracle 12 or 13.
Friday, January 18, 2008
MATERIALIZED VIEWS
A materialized view provides access to table data by storing results of a query in a separate schema object.
Oracle maintains the data in materialized views by refreshing them after changes are made to their master tables. The refresh method can be incremental, (fast refresh) or complete. For materialized views that use the fast refresh method, a materialized view log keeps a record of changes to the master tables.
Materialized views can be refreshed either on demand or at regular time intervals. Normal replication occurs at the row level.
Setup involves
- Configuring certain DB parameters
- Configuring the replicated objects
Parameters that need to be something other than the default (a quick way to set them follows this list):
- job_queue_processes – at least 1
- open_links – one additional link for each database that will be replicated to
- shared_pool_size – add 20 MB for basic replication
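Setting them looks something like this (values purely illustrative; open_links is a static parameter, so it takes a spfile change and a bounce):
SQL> alter system set job_queue_processes = 2 scope = both;
SQL> alter system set open_links = 5 scope = spfile;
SQL> show parameter shared_pool_size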
SQL> CREATE MATERIALIZED VIEW LOG ON emp;
Materialized view log created.
SQL> CREATE MATERIALIZED VIEW my_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp;
Materialized view created.
(The log has to come first: without it, the CREATE with REFRESH FAST fails with ORA-23413.)
REFRESH:
[refresh [fast | complete | force]
[on {demand | commit}]
[start with date] [next date]
[with {primary key | rowid}]]
The refresh option specifies:
- The refresh method used by Oracle to refresh data in a materialized view.
- Whether the view is primary key based or rowid based.
- The time and interval at which the view is to be refreshed.
Refresh Method:
- FAST
- A materialized view log must exist on the master tables before you specify REFRESH FAST (hence the log creation above).
- Materialized views are not eligible for fast refresh if the defining subquery contains an analytic function.
- COMPLETE – recreates the entire materialized view.
- FORCE – performs a fast refresh if one is possible, or a complete refresh otherwise. FORCE is the default. (Manual refresh calls using these methods are sketched just below.)
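A refresh can also be kicked off by hand with DBMS_MVIEW.REFRESH; the second argument is the method ('F' fast, 'C' complete, '?' force). The view name is the one from the example above:
SQL> execute DBMS_MVIEW.REFRESH('my_emp_pk', 'F');
SQL> execute DBMS_MVIEW.REFRESH('my_emp_pk', '?');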
PRIMARY KEY / ROWID Clause
- WITH PRIMARY KEY – Materialized view is based on the primary key of the master table. This is the default.
- Primary Key materialized views allow materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.
- ROWID materialized views must be based on a single master table and cannot contain:
- DISTINCT or aggregate functions
- GROUP BY or CONNECT BY clauses
- Subqueries, joins, or set operations
OTHER:
SQL> execute DBMS_REFRESH.DESTROY(name => 'my_group1');
Removes all materialized views from the refresh group and deletes the refresh group.
SQL> select job, what, failures from user_jobs;
To see if any refreshes are failing.
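For completeness, DESTROY's counterpart: DBMS_REFRESH.MAKE creates a refresh group and DBMS_REFRESH.REFRESH runs it on demand. The group name, member list, and interval here are just the made-up names from above:
SQL> execute DBMS_REFRESH.MAKE(name => 'my_group1', list => 'my_emp_pk', next_date => SYSDATE, interval => 'SYSDATE + 1/24');
SQL> execute DBMS_REFRESH.REFRESH('my_group1');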
References:
- Materialized Views in Oracle, by Ajay Gursahani (most of the material in this blog item is taken directly from this article)
- Oracle Replication: Expert Methods for Robust Data Sharing, by Karam, Garmany, and Burleson. Rampant Press.
The magical MIT drawing/motion simulator.
An MIT system allows the user to draw a design on a whiteboard and then see how it operates in motion, just from the drawing! I haven't adequately described how amazing it is. Take a look.
http://www.youtube.com/watch?v=NZNTgglPbUA
Thursday, January 17, 2008
Northern California - NoCOUG conference 2/19/08
The NoCOUG site for this is http://www.nocoug.org/next.html
Tuesday, January 15, 2008
Five things about Data Pump.
Nobody tagged me, but here are 5 things about Data Pump that it would be nice to have available in my brain when someone asks me.
1. Description
Data Pump is a utility that moves data and metadata.
To the user it looks like export/import, but they are separate utilities and their files are not compatible.
Data Pump runs on the server. This helps performance since there is no movement of data back and forth across the network between client and server.
2. Dump Files
Export dump files are a bunch of DDL and INSERT statements.
Data Pump dump files are in a binary format, very similar to the format in which Oracle stores data in datafiles inside tablespaces.
3. Usage
You may export/import tables, schemas, or the whole database.
You may export/import just metadata, or metadata and data.
You may run a Data Pump export in estimate-only mode to determine how much space the dump file will require.
You need to create a directory on the server to receive the dump files.
The schema that will be running Data Pump needs read/write privileges on that directory.
You can disconnect from and reconnect to a Data Pump job without stopping it.
You can connect to a running Data Pump job and check its status.
A Data Pump job can be restarted if it fails. (All of this is sketched below.)
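Roughly what the above looks like end to end. The directory path, the scott schema, and the password are placeholders, and the job name in the attach example is just the 10g default for a schema export:
SQL> CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;

$ expdp scott/tiger schemas=scott directory=dpump_dir dumpfile=scott.dmp logfile=scott.log
$ expdp scott/tiger schemas=scott directory=dpump_dir estimate_only=y
$ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01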
4. Views
You can check the existence of a directory in the dba_directories view.
You can check the privileges that a user has on the directory in the dba_tab_privs view. The directory name is found in the table_name field.
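Spelled out as queries (DPUMP_DIR being the directory from the sketch above):
SQL> select directory_name, directory_path from dba_directories;
SQL> select grantee, privilege from dba_tab_privs where table_name = 'DPUMP_DIR';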
5. Architecture
The major components of Data Pump are:
Master Table – holds the job info while the job is running.
Processes – including:
Master process – controls execution of the job
Client process – the expdp and impdp executables
Shadow process – creates the master table and the AQ queues
Worker processes – do the actual load and unload (visible in the dictionary; see the query below)
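While a job runs, it (and the name of its master table, which matches the job name) shows up in dba_datapump_jobs:
SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;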
There are 8 (or maybe 42) more things that I know about and would love to be able to talk about without stumbling and sounding like an idiot! More posts to follow.
Sunday, January 6, 2008
I passed the 10g OCP exam. Three Cheers!
Yesterday I passed the Oracle 10g New Features for DBAs certification exam. Three cheers!!!
When I walked out of the exam center I found myself saying, “I have to hurry home to study for the exam.” Then I realized, I don’t have to study this information now. I passed the test. I can now start looking at all of the interesting things that have been passing by in the blogs. There are so many topics that I would like to know something about. Now I can begin to learn.