Manage additional buffer cache

Hands On Lab – Index

[1] In-Memory option
[2] Automatic Big Table Caching
[3] Force Full Database Caching Mode
[4] Database Smart Flash Cache

[1] In-Memory option

6 Managing Memory
6.7 Using the In-Memory Column Store

The In-Memory Column Store (IM column store) is an optional portion of the 
system global area (SGA) that stores copies of tables, table partitions, 
and other database objects. In the IM column store, data is populated by 
column rather than row as it is in other parts of the SGA, and data is 
optimized for rapid scans. The IM column store is included with the Oracle 
Database In-Memory option.


6.7.1 About the IM Column Store
You can choose from various compression methods and data population options 
for the IM column store.


6.7.1.1 Overview of the IM Column Store

The In-Memory Column Store (IM column store) was the headline feature of 
the 12.1.0.2 patchset. 
This feature allows you to store columns, tables, partitions and 
materialized views in memory in a columnar format, rather than the 
typical row format. The advantage of having data in memory is obvious, 
but the columnar storage lends itself extremely well to analytic queries 
found in business intelligence products.

Note:
The IM column store is a separately licensed option of Oracle Database 
Enterprise Edition.

The In-Memory column store is a new section of the SGA, sized using the 
INMEMORY_SIZE initialization parameter. You can choose to store specific 
groups of columns, whole tables, materialized views or table partitions 
in the store. Alternatively, you can enable IM column store at the tablespace 
level, so all tables and materialized views in the tablespace are automatically 
enabled for the IM column store. The following commands have been modified to 
include additional in-memory clauses.


CREATE TABLE
ALTER TABLE
CREATE TABLESPACE
ALTER TABLESPACE
CREATE MATERIALIZED VIEW
ALTER MATERIALIZED VIEW

Typically, multi-column indexes are created to improve the performance of 
analytic and reporting queries. 
These indexes can impede the performance of data manipulation language 
(DML) statements. When a database object is populated in the IM column store, 
indexes used for analytic or reporting queries can be reduced or eliminated 
without affecting query performance. Eliminating these indexes can improve 
the performance of transactions and data loading operations.
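As a quick sketch of how the INMEMORY clause attaches to these commands (the table and tablespace names below are illustrative, not from the lab):

```sql
-- Enable the IM column store attribute at creation time
CREATE TABLE sales_im (
  id     NUMBER,
  amount NUMBER
) INMEMORY;

-- Add or remove the attribute on an existing table
ALTER TABLE sales_im NO INMEMORY;
ALTER TABLE sales_im INMEMORY;

-- Make INMEMORY the default for new objects in a tablespace
ALTER TABLESPACE users DEFAULT INMEMORY;
```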

The documentation claims the IM column store is good for the following.
-> Large scans that apply "=", "<", ">" and "IN" filters.
-> Queries that return a small number of columns from a table with a large number of columns.
-> Queries that join small tables to large tables.
-> Queries that aggregate data.

It also states it is not designed for the following.

-> Queries with complex predicates.
-> Queries that return a large number of columns.
-> Queries that return large numbers of rows.
-> Queries with multiple large table joins.

Note:
A database object cannot be populated in the IM column store if it 
is owned by the SYS user and stored in the SYSTEM or SYSAUX tablespace.


Note:
The important thing to remember here is *you* will be responsible for 
deciding which objects will benefit the most from inclusion in the IM 
column store. If you choose wisely you will see big improvements in performance. 
If you choose badly, you will waste a lot of memory that could be used by 
the buffer cache.


6.7.1.2 IM Column Store Compression Methods
In the IM column store, data can be compressed, and SQL queries execute 
directly on compressed data.

Table 6-5 IM Column Store Compression Methods

Compression Method in CREATE/ALTER Syntax

NO MEMCOMPRESS 
-> The data is not compressed.

MEMCOMPRESS FOR DML 
-> This method optimizes the data for DML operations and compresses IM column 
store data the least (excluding NO MEMCOMPRESS).

MEMCOMPRESS FOR QUERY LOW 
-> This method results in the best query performance.
-> This method is the default when the INMEMORY clause is specified without 
a compression method, or when MEMCOMPRESS FOR QUERY is specified without 
including either LOW or HIGH.

MEMCOMPRESS FOR QUERY HIGH 
-> This method results in excellent query performance.

MEMCOMPRESS FOR CAPACITY LOW 
-> This method results in good query performance.
-> This method is the default when MEMCOMPRESS FOR CAPACITY is specified 
without including either LOW or HIGH.

MEMCOMPRESS FOR CAPACITY HIGH 
-> This method results in fair query performance.
-> This method compresses IM column store data the most.
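A minimal sketch of how these compression levels are specified in DDL (table name illustrative):

```sql
-- Pick a compression level at creation time
CREATE TABLE trades (
  id  NUMBER,
  qty NUMBER
) INMEMORY MEMCOMPRESS FOR QUERY HIGH;

-- Change the level, or switch compression off, on an existing table
ALTER TABLE trades INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;
ALTER TABLE trades INMEMORY NO MEMCOMPRESS;
```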

6.7.1.3 IM Column Store Data Population Options

Table 6-6 Priority Levels for Populating a Database Object in the IM Column Store

Priority Levels in CREATE/ALTER Syntax

PRIORITY NONE
Oracle Database controls when the database object's data is populated in 
the IM column store. A scan of the database object triggers the population 
of the object into the IM column store. This is the default level when 
PRIORITY is not included in the INMEMORY clause.

PRIORITY LOW
The database object's data is populated in the IM column store before
database objects with the following priority level: NONE.
The database object's data is populated in the IM column store after 
database objects with the following priority levels: MEDIUM, HIGH, or CRITICAL.

PRIORITY MEDIUM
The database object's data is populated in the IM column store before 
database objects with the following priority levels: NONE or LOW.
The database object's data is populated in the IM column store after 
database objects with the following priority levels: HIGH or CRITICAL.

PRIORITY HIGH
The database object's data is populated in the IM column store before 
database objects with the following priority levels: NONE, LOW, or MEDIUM.
The database object's data is populated in the IM column store after database 
objects with the following priority level: CRITICAL.

PRIORITY CRITICAL
The database object's data is populated in the IM column store before 
database objects with the following priority levels: NONE, LOW, MEDIUM, 
or HIGH.
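A sketch of the PRIORITY clause (table name illustrative); a priority level can be combined with a compression method in the same INMEMORY clause:

```sql
CREATE TABLE orders_im (
  id NUMBER
) INMEMORY PRIORITY CRITICAL;

-- Combine priority with a compression method
ALTER TABLE orders_im INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;
```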

------
Notes
------
1. When more than one database object has a priority level other than NONE, 
Oracle Database queues all of the data for the database objects to be populated 
in the IM column store based on priority level. 
Data for database objects with the CRITICAL priority level is populated first, 
data for database objects with the HIGH priority level is populated next, 
and so on. If there is no space remaining in the IM column store, then no 
additional objects are populated in it until sufficient space becomes 
available.

2. When a database is restarted, all of the data for database objects with 
a priority level other than NONE is populated in the IM column store during startup.

3. For a database object with a priority level other than NONE, an 
ALTER TABLE or ALTER MATERIALIZED VIEW 
DDL statement involving the database object does not return until the 
DDL changes are recorded in the IM column store.

4. The priority level setting must apply to an entire table or to a table
partition. Specifying different IM column store priority levels for different 
subsets of columns in a table is not allowed.

5. If a segment on disk is 64 KB or less, then it is not populated in the 
IM column store. Therefore, some small database objects that were enabled 
for the IM column store might not be populated in it.

6.7.2 Initialization Parameters Related to the IM Column Store

A set of initialization parameters are related to the IM column store.

-------------
INMEMORY_SIZE
-------------
-> The default value is 0, which means that the IM column store is not used.
-> Must be set to a non-zero value to enable the IM column store. If the parameter is set to a non-zero value, 
then the minimum setting is 100M.

Example: Configure the database to use the IM column store:

$ sql
SQL*Plus: Release 12.1.0.2.0 Production on Wed Sep 4 17:03:49 2019

SQL> show parameter inmemory_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_size big integer 0
SQL>
SQL> alter system set inmemory_size=100M scope=both;
alter system set inmemory_size=100M scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-02095: specified initialization parameter cannot be modified

SQL> alter system set inmemory_size=100M scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 2932632 bytes
Variable Size 708837480 bytes
Database Buffers 251658240 bytes
Redo Buffers 5455872 bytes
In-Memory Area 104857600 bytes
Database mounted.
Database opened.

Notice the "In-Memory Area" line produced during the startup.


SQL> show parameter inmemory_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_size big integer 100M

The INMEMORY_SIZE parameter setting is inherited by all PDBs unless it is 
explicitly set at the PDB level. 
Changing the INMEMORY_SIZE parameter value at the PDB level does not require 
a restart of the instance or PDB.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB2 READ WRITE NO
5 PDB3 MOUNTED
SQL>
SQL> alter session set container=pdb2;

Session altered.

SQL> show parameter inmemory_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_size big integer 100M
SQL>

-- Disable IM column store in the PDB
ALTER SYSTEM SET INMEMORY_SIZE=0;
-- OR
ALTER SYSTEM RESET INMEMORY_SIZE;

-- Assign a PDB-specific size.

SQL> alter system set inmemory_size=10M;

System altered.

--------------- 
INMEMORY_FORCE
---------------
-> The default value is DEFAULT, which allows the INMEMORY or NO INMEMORY 
attribute on individual tables and materialized views to determine whether 
they are populated in the IM column store.
-> Setting the parameter to OFF means objects will not be maintained in 
the IM column store.

Example : System level

ALTER SYSTEM SET INMEMORY_FORCE=OFF;
ALTER SYSTEM SET INMEMORY_FORCE=DEFAULT;

-----------------------
INMEMORY_CLAUSE_DEFAULT
-----------------------
-> This parameter enables you to specify a default IM column store clause 
for new tables and materialized views.
-> Leave this parameter unset or set it to an empty string or NO INMEMORY 
to specify that there is no default IM column store 
clause for new tables and materialized views.

-> If the clause starts with INMEMORY, then all new tables and materialized 
views, including those without an INMEMORY clause, are populated in the 
IM column store. If the clause omits INMEMORY, then it only applies to new 
tables and materialized views that are enabled for the IM column store with 
an INMEMORY clause during creation.

The following statement causes no tables to populate the IM column store:
alter system set INMEMORY_CLAUSE_DEFAULT='NO INMEMORY' scope=spfile;

The following statement causes new tables and materialized views (except 
those specified as NO INMEMORY) to populate the IM column store at the 
high capacity compression level:

alter system set INMEMORY_CLAUSE_DEFAULT='INMEMORY MEMCOMPRESS FOR CAPACITY HIGH' scope=spfile;

The following statement causes new tables (even those specified as NO INMEMORY) 
to populate the IM column store at the highest performance compression level 
at LOW priority:

alter system set INMEMORY_CLAUSE_DEFAULT='INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY LOW' scope=spfile;

The following statement causes new tables (even those specified as NO INMEMORY) 
to populate the IM column store 
without any in-memory compression:

alter system set INMEMORY_CLAUSE_DEFAULT='INMEMORY NO MEMCOMPRESS' scope=spfile;

The following statement causes tables in the IM column store to be duplicated 
on every Oracle RAC instance on an engineered system. On a non-engineered 
system, the duplicate clause (DUPLICATE ALL) is ignored and tables in the 
column store are automatically distributed across the Oracle RAC instances, 
with only one copy of each IMCU in the IM column store:

alter system set INMEMORY_CLAUSE_DEFAULT='INMEMORY MEMCOMPRESS FOR QUERY DISTRIBUTE AUTO DUPLICATE ALL'
scope=spfile;

The following statement sets the value of the INMEMORY_CLAUSE_DEFAULT 
parameter back to its default value, 
the empty string:

alter system set INMEMORY_CLAUSE_DEFAULT='' scope=spfile;

---------------
INMEMORY_QUERY
---------------
-> This parameter specifies whether in-memory queries are allowed. Set 
this parameter to ENABLE, the default value, to allow queries to access 
database objects populated in the IM column store.
-> Set this parameter to DISABLE to disable access to the database objects 
populated in the IM column store.

Example:

-- System level
ALTER SYSTEM SET INMEMORY_QUERY=DISABLE;
ALTER SYSTEM SET INMEMORY_QUERY=ENABLE;

-- Session level
ALTER SESSION SET INMEMORY_QUERY=DISABLE;
ALTER SESSION SET INMEMORY_QUERY=ENABLE;

-----------------------------
INMEMORY_MAX_POPULATE_SERVERS
-----------------------------
This parameter specifies the maximum number of background populate servers 
to use for IM column store population, so that these servers do not overload 
the rest of the system. Set this parameter to an appropriate value based on 
the number of cores in the system.

When INMEMORY_MAX_POPULATE_SERVERS is set to 0, objects cannot be populated 
in the IM column store.

-> This parameter has meaning only if the INMEMORY_SIZE parameter is also 
set to a positive value.
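The parameter can also be set explicitly; a sketch (the value 2 is illustrative, size it against cpu_count):

```sql
-- Cap the number of background populate servers
ALTER SYSTEM SET INMEMORY_MAX_POPULATE_SERVERS=2;
```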

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_max_populate_servers integer 0
inmemory_size big integer 0

SQL> alter system set inmemory_size=100m scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 805306368 bytes
Fixed Size 2929552 bytes
Variable Size 339741808 bytes
Database Buffers 352321536 bytes
Redo Buffers 5455872 bytes
In-Memory Area 104857600 bytes
Database mounted.
Database opened.
SQL>
SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_max_populate_servers integer 1
inmemory_size big integer 100M

SQL> show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 2
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 2

-> The default value is half the effective CPU thread count or the PGA_AGGREGATE_TARGET 
value divided by 512M, whichever is less.

-------------------------------------------
INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT
-------------------------------------------

-> This parameter limits the maximum number of background populate servers 
used for In-Memory Column Store repopulation, as trickle repopulation is 
designed to use only a small percentage of the populate servers. 
The value for this parameter is a percentage of the 
INMEMORY_MAX_POPULATE_SERVERS parameter value.

-> This parameter has meaning only if the INMEMORY_SIZE parameter is also 
set to a positive value.
For example, if this parameter is set to 5 and INMEMORY_MAX_POPULATE_SERVERS 
is set to 10, then on average half of a core is used for trickle repopulation.

-> The default value of 1 is good in most cases. In some cases, if you want 
to disable trickle repopulate, this parameter can be set to 0. If you want 
to keep the system more aggressively up to date (at the expense of more 
background CPU), you can set the parameter to higher values such as 5 or 10.
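A sketch of the tuning options just described (values illustrative):

```sql
-- Disable trickle repopulation entirely
ALTER SYSTEM SET INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT=0;

-- Keep the IM column store more aggressively up to date
ALTER SYSTEM SET INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT=10;
```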

------------------------
OPTIMIZER_INMEMORY_AWARE
------------------------
-> The default value is TRUE.
-> This parameter enables or disables all of the optimizer cost model 
enhancements for in-memory. 
-> Setting the parameter to FALSE causes the optimizer to ignore the 
in-memory property of tables during the optimization of SQL statements.
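Following the pattern of the other parameters above, a sketch of toggling it:

```sql
-- System level
ALTER SYSTEM SET OPTIMIZER_INMEMORY_AWARE=FALSE;
ALTER SYSTEM SET OPTIMIZER_INMEMORY_AWARE=TRUE;

-- Session level
ALTER SESSION SET OPTIMIZER_INMEMORY_AWARE=FALSE;
```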

Example : Disable In-Memory Column Store

There are several ways to disable the IM column store, depending on what 
you are trying to achieve.

1. Setting the INMEMORY_FORCE parameter to "OFF" means objects will not 
be maintained in the IM column store.

ALTER SYSTEM SET INMEMORY_FORCE=OFF;

2. Setting the INMEMORY_QUERY parameter to "DISABLE" means the optimizer 
will not consider the IM column store when optimizing queries. Switching it 
back to "ENABLE" restores the default behavior.

-- System level
ALTER SYSTEM SET INMEMORY_QUERY=DISABLE;

-- Session level
ALTER SESSION SET INMEMORY_QUERY=DISABLE;

3. To disable the IM column store completely and release the memory, reset 
the INMEMORY_SIZE parameter.
ALTER SYSTEM RESET INMEMORY_SIZE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP; 

Note:
As described in the previous section, the PDB-specific settings can be 
altered without an instance or PDB restart.

6.7.4 Enabling and Disabling Tables for the IM Column Store

#Managing Tables

You enable a table for the IM column store by including an INMEMORY clause 
in a CREATE TABLE or ALTER TABLE statement. 
You disable a table for the IM column store by including a NO INMEMORY clause 
in a CREATE TABLE or ALTER TABLE statement.

SQL> connect test/test@racnode2/pdb3
Connected.
SQL>
SQL> CREATE TABLE im_tab (
id NUMBER
) INMEMORY;

Table created.

SQL> CREATE TABLE noim_tab (
id NUMBER
) NO INMEMORY;

Table created.

SQL> CREATE TABLE default_tab (
id NUMBER
);
Table created.

SQL> COLUMN table_name FORMAT A20
SELECT table_name,
inmemory,
inmemory_priority,
inmemory_distribute,
inmemory_compression,
inmemory_duplicate
FROM user_tables
ORDER BY table_name;

TABLE_NAME INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------------------------------------------------------------
DEFAULT_TAB DISABLED
IM_TAB ENABLED NONE AUTO FOR QUERY LOW NO DUPLICATE
NOIM_TAB DISABLED


-> The ALTER TABLE command can change the IM status of the objects. 
The following example flips the status.

SQL> ALTER TABLE IM_TAB NO INMEMORY;

Table altered.

SQL> ALTER TABLE NOIM_TAB INMEMORY MEMCOMPRESS FOR CAPACITY LOW;

Table altered.

SQL> ALTER TABLE DEFAULT_TAB INMEMORY PRIORITY HIGH;

Table altered.

SQL> SELECT table_name,
inmemory,
inmemory_priority,
inmemory_distribute,
inmemory_compression,
inmemory_duplicate
FROM user_tables
ORDER BY table_name;

TABLE_NAME INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
------------------------------------------------------------------------

DEFAULT_TAB ENABLED HIGH AUTO FOR QUERY LOW NO DUPLICATE
IM_TAB DISABLED
NOIM_TAB ENABLED NONE AUTO FOR CAPACITY LOW NO DUPLICATE


#Managing Columns
The following example shows the syntax for including a subset of columns 
in the IM column store.

SQL> CREATE TABLE im_col_tab (
id NUMBER,
col1 NUMBER,
col2 NUMBER,
col3 NUMBER,
col4 NUMBER
) INMEMORY
INMEMORY MEMCOMPRESS FOR QUERY HIGH (col1, col2)
INMEMORY MEMCOMPRESS FOR CAPACITY HIGH (col3)
NO INMEMORY (id, col4);

The column settings are displayed using the V$IM_COLUMN_LEVEL view.

SQL> SELECT table_name,
segment_column_id,
column_name,
inmemory_compression
FROM v$im_column_level
WHERE owner = 'TEST'
and table_name = 'IM_COL_TAB'
ORDER BY segment_column_id;

TABLE_NAME SEGMENT_COLUMN_ID COLUMN_NAME INMEMORY_COMPRESSION
-------------------------------------------------------------------------------
IM_COL_TAB 1 ID NO INMEMORY
IM_COL_TAB 2 COL1 FOR QUERY HIGH
IM_COL_TAB 3 COL2 FOR QUERY HIGH
IM_COL_TAB 4 COL3 FOR CAPACITY HIGH
IM_COL_TAB 5 COL4 NO INMEMORY

ALTER TABLE im_col_tab 
NO INMEMORY (col1, col2)
INMEMORY MEMCOMPRESS FOR CAPACITY HIGH (col3)
NO INMEMORY (id, col4);

TABLE_NAME SEGMENT_COLUMN_ID COLUMN_NAME INMEMORY_COMPRESSION
------------------------------------------------------------------------------
IM_COL_TAB 1 ID NO INMEMORY
IM_COL_TAB 2 COL1 NO INMEMORY
IM_COL_TAB 3 COL2 NO INMEMORY
IM_COL_TAB 4 COL3 FOR CAPACITY HIGH
IM_COL_TAB 5 COL4 NO INMEMORY

6.7.6 Enabling and Disabling Materialized Views for the IM Column Store

#Managing Materialized Views
CREATE TABLE t1 AS
SELECT * FROM all_objects;

CREATE MATERIALIZED VIEW t1_mv INMEMORY
AS SELECT * FROM T1;

SELECT table_name,
inmemory,
inmemory_priority,
inmemory_distribute,
inmemory_compression,
inmemory_duplicate
FROM user_tables
WHERE table_name = 'T1_MV';

TABLE_NAME INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------------------------------------------------------------
T1_MV ENABLED NONE AUTO FOR QUERY LOW NO DUPLICATE

alter materialized view t1_mv
inmemory memcompress for capacity high priority high;

Materialized view altered.

TABLE_NAME INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
------------------------------------------------------------------------
T1_MV ENABLED HIGH AUTO FOR CAPACITY HIGH NO DUPLICATE


ALTER MATERIALIZED VIEW t1_mv NO INMEMORY;

TABLE_NAME INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
------------------------------------------------------------------------
T1_MV DISABLED

6.7.5 Enabling and Disabling Tablespaces for the IM Column Store
#Managing Tablespaces

SQL> alter session set container=pdb3;

Session altered.

SQL> select name from v$datafile;

NAME
------------------------------------------------------------------------
/u03/app/oracle/oradata/cdb2/undotbs01.dbf
/u03/app/oracle/oradata/cdb2/pdb3/system01.dbf
/u03/app/oracle/oradata/cdb2/pdb3/sysaux01.dbf
/tmp/userspdb3.dbf

SQL> create tablespace test_im
2 datafile '/u03/app/oracle/oradata/cdb2/pdb3/test_im.dbf' size 10M autoextend on
3 default INMEMORY;

Tablespace created.

SQL> SELECT tablespace_name,
def_inmemory,
def_inmemory_priority,
def_inmemory_distribute,
def_inmemory_compression,
def_inmemory_duplicate
FROM dba_tablespaces
ORDER BY tablespace_name;

TABLESPACE_NAME DEF_INME DEF_INME DEF_INMEMORY_DI DEF_INMEMORY_COMP DEF_INMEMORY_
----------------- ------------------------------------------------------
SYSAUX DISABLED
SYSTEM DISABLED
TEMP DISABLED
TEST_IM ENABLED NONE AUTO FOR QUERY LOW NO DUPLICATE
USERS DISABLED

SQL> alter tablespace test_im
2 default inmemory memcompress for capacity high;

Tablespace altered.

TABLESPACE_NAME DEF_INME DEF_INME DEF_INMEMORY_DI DEF_INMEMORY_COMP DEF_INMEMORY_
------------------------------------------------------------------------
SYSAUX DISABLED
SYSTEM DISABLED
TEMP DISABLED
TEST_IM ENABLED NONE AUTO FOR CAPACITY HIGH NO DUPLICATE
USERS DISABLED


SQL> alter tablespace test_im
2 default no inmemory;

Tablespace altered.


TABLESPACE_NAME DEF_INME DEF_INME DEF_INMEMORY_DI DEF_INMEMORY_COMP DEF_INMEMORY_
-------------------------------------------------------------------------
SYSAUX DISABLED
SYSTEM DISABLED
TEMP DISABLED
TEST_IM DISABLED
USERS DISABLED

#Views
V$IM_COLUMN_LEVEL
V$IM_SEGMENTS
V$IM_USER_SEGMENTS

-> Example of how to create a table and enable it for the IM column store:

CREATE TABLE oraclelearn (
id NUMBER(5) PRIMARY KEY,
blog_type VARCHAR2(15))
INMEMORY;

-> There are several priority levels, each set with its own clause, as shown below:

ALTER TABLE oraclelearn INMEMORY PRIORITY NONE;
ALTER TABLE oraclelearn INMEMORY PRIORITY HIGH;
ALTER TABLE oraclelearn INMEMORY PRIORITY LOW;
ALTER TABLE oraclelearn INMEMORY PRIORITY MEDIUM;
ALTER TABLE oraclelearn INMEMORY PRIORITY CRITICAL;

Each priority level determines how Oracle will manage and populate the data 
of the object in the IM column store.

You can easily check whether your tables are in-memory using the following select:

SELECT table_name, inmemory_compression, inmemory_priority , inmemory_distribute 
FROM user_tables;

OR

SELECT OWNER, SEGMENT_NAME, INMEMORY_PRIORITY, INMEMORY_COMPRESSION 
FROM V$IM_SEGMENTS;

-- Disable the oraclelearn table for the IM column store:
ALTER TABLE oraclelearn NO INMEMORY;

6.7.7 Data Pump and the IM Column Store

-> You can import database objects that are enabled for the IM column 
store using the TRANSFORM=INMEMORY:y option of the impdp command. With 
this option, Data Pump keeps 
the IM column store clause for all objects that have one.

-> When the TRANSFORM=INMEMORY:n option is specified, Data Pump drops the 
IM column store clause from all objects that have one.

Example:

TRANSFORM=INMEMORY:[Y|N]

You can also use the TRANSFORM=INMEMORY_CLAUSE:string option to override 
the IM column store clause for a database object in the dump file during 
import. For example, you can use this option to change 
the IM column store compression for a database object being imported.

Example:
TRANSFORM=INMEMORY_CLAUSE:\"NO INMEMORY\"
TRANSFORM=INMEMORY_CLAUSE:\"INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY LOW\"


#VIEWS and SQL key statements:

V$IM_SEGMENTS
v$im_column_level
USER_TABLES
USER_TABLESPACES
DBA_DIRECTORIES
SELECT OWNER, SEGMENT_NAME, INMEMORY_PRIORITY, INMEMORY_COMPRESSION 
FROM V$IM_SEGMENTS;

---------------------------
HOL #Test Performance of IM 
---------------------------

15:18:40 SQL> create table test_inmemory
15:18:47 2 (id number,name varchar2(50));

Table created.

SQL>
BEGIN
FOR I IN 1..100000
LOOP
INSERT INTO TEST_INMEMORY
VALUES(I,'Hitesh'||I);
END LOOP;
END;
/

PL/SQL procedure successfully completed.

SQL> insert into test_inmemory select * from test_inmemory;

100000 rows created.

SQL> commit;

Commit complete.

15:22:14 SQL> select count(*) from test_inmemory;

Execution Plan
----------------------------------------------------------
Plan hash value: 1029766195

----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 171 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| TEST_INMEMORY | 213K| 171 (1)| 00:00:01 |
----------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)

15:23:01 SQL> alter table test_inmemory inmemory;

Table altered.

15:23:24 SQL> select count(*) from test_inmemory;

Execution Plan
----------------------------------------------------------
Plan hash value: 1029766195
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS INMEMORY FULL| TEST_INMEMORY | 229K| 7 (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)

15:23:38 SQL>

-----------------------------------------------
HOL Test with IM with DataPump TRANSFORM option
-----------------------------------------------

-bash-4.1$ expdp SH/SH@racnode2/pdb2 tables=TEST_INMEMORY directory=PDB2_DIR

Export: Release 12.1.0.2.0 - Production on Thu Sep 5 15:33:12 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SH"."SYS_EXPORT_TABLE_01": SH/********@racnode2/pdb2 tables=TEST_INMEMORY directory=PDB2_DIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 5 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SH"."TEST_INMEMORY" 3.968 MB 200000 rows
Master table "SH"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SH.SYS_EXPORT_TABLE_01 is:
/tmp/expdat.dmp
Job "SH"."SYS_EXPORT_TABLE_01" successfully completed at Thu Sep 5 15:33:33 2019 elapsed 0 00:00:19


-bash-4.1$ export ORACLE_SID=cdb1
-bash-4.1$ sql

SQL> startup
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO
SQL> alter session set container=pdb1;

Session altered.

SQL> grant dba to sh identified by sh;

Grant succeeded.

SQL> create directory PDB1_DIR as '/tmp';

Directory created.

SQL> grant read,write on directory PDB1_DIR to sh;

Grant succeeded.

SQL> create tablespace example datafile '/u03/app/oracle/oradata/cdb1/pdb1/example.dbf' size 50M autoextend on;

Tablespace created.

SQL> alter user sh quota unlimited on example;

#TRANSFORM=INMEMORY:y inherits IM export object properties

-bash-4.1$ impdp sh/sh@racnode2/pdb1 directory=PDB1_DIR TRANSFORM=INMEMORY:Y

Import: Release 12.1.0.2.0 - Production on Thu Sep 5 15:40:03 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SH"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "SH"."SYS_IMPORT_FULL_02": sh/********@racnode2/pdb1 directory=PDB1_DIR TRANSFORM=INMEMORY:Y
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SH"."TEST_INMEMORY" 3.968 MB 200000 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SH"."SYS_IMPORT_FULL_02" successfully completed at Thu Sep 5 15:40:29 2019 elapsed 0 00:00:23

-bash-4.1$ sqlplus sh/sh@racnode2/pdb1

SQL> SELECT table_name, inmemory_compression, inmemory_priority , inmemory_distribute FROM user_tables;

TABLE_NAME INMEMORY_COMPRESS INMEMORY INMEMORY_DISTRI
--------------------------------------------------------------
TEST_INMEMORY FOR QUERY LOW NONE AUTO

SQL> drop table test_inmemory purge;


#TRANSFORM=INMEMORY:n does not inherit IM export object properties

-bash-4.1$ impdp sh/sh@racnode2/pdb1 directory=PDB1_DIR TRANSFORM=INMEMORY:N

Import: Release 12.1.0.2.0 - Production on Thu Sep 5 15:59:50 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SH"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SH"."SYS_IMPORT_FULL_01": sh/********@racnode2/pdb1 directory=PDB1_DIR TRANSFORM=INMEMORY:N
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SH"."TEST_INMEMORY" 3.968 MB 200000 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SH"."SYS_IMPORT_FULL_01" successfully completed at Thu Sep 5 16:00:09 2019 elapsed 0 00:00:18


-bash-4.1$ sqlplus sh/sh@racnode2/pdb1


SQL> SELECT table_name, inmemory_compression, inmemory_priority , 
inmemory_distribute FROM user_tables where table_name='TEST_INMEMORY';

TABLE_NAME INMEMORY_COMPRESS INMEMORY INMEMORY_DISTRI
----------------- -------- ---------------
TEST_INMEMORY

SQL> drop table TEST_INMEMORY purge;

Table dropped.


#TRANSFORM=INMEMORY_CLAUSE:string overrides the IM column store clause of 
the exported object

-bash-4.1$ impdp sh/sh@racnode2/pdb1 directory=PDB1_DIR 
TRANSFORM=INMEMORY_CLAUSE:\"INMEMORY MEMCOMPRESS FOR DML\"

Import: Release 12.1.0.2.0 - Production on Thu Sep 5 16:14:01 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SH"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SH"."SYS_IMPORT_FULL_01": sh/********@racnode2/pdb1 directory=PDB1_DIR TRANSFORM=INMEMORY_CLAUSE:"INMEMORY MEMCOMPRESS FOR DML"
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SH"."TEST_INMEMORY" 3.968 MB 200000 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SH"."SYS_IMPORT_FULL_01" successfully completed at Thu Sep 5 16:14:19 2019 elapsed 0 00:00:17

-bash-4.1$ sqlplus sh/sh@racnode2/pdb1

SQL> SELECT table_name, inmemory_compression, inmemory_priority , inmemory_distribute FROM user_tables where table_name='TEST_INMEMORY';

TABLE_NAME INMEMORY_COMPRESS INMEMORY INMEMORY_DISTRI
----------------- --------------------------------- ---------------
TEST_INMEMORY FOR DML NONE AUTO

12.2 New Feature
“In-Memory FastStart (IM FastStart) reduces the time to populate data into 
the IM column store when a database instance restarts.”
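In 12.2, FastStart is enabled per tablespace through the DBMS_INMEMORY_ADMIN package. A minimal sketch (the tablespace name FS_TBS is illustrative and must exist beforehand):

```sql
-- 12.2 only: designate a tablespace to hold the IM FastStart area
-- (FS_TBS is a hypothetical, pre-created tablespace)
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FS_TBS');
```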


[2] Automatic Big Table Caching

Automatic big table caching integrates queries with the buffer cache to 
enhance the in-memory query capabilities of Oracle Database, in both single 
instance and Oracle RAC environments.

The cache section reserved for the big table cache is used for caching data 
for table scans. While the big table cache is primarily designed to enhance 
performance for data warehouse workloads, it also improves performance in 
Oracle Database running mixed workloads.

Automatic big table caching uses temperature and object based algorithms 
to track medium and big tables. Oracle does cache very small tables, but 
automatic big table caching does not track these tables.

-> To enable automatic big table caching for serial queries, you must set 
a value (percentage) for the DB_BIG_TABLE_CACHE_PERCENT_TARGET initialization 
parameter.
-> Additionally, you must set the PARALLEL_DEGREE_POLICY initialization 
parameter to AUTO or ADAPTIVE to enable parallel queries to use automatic 
big table caching.
-> In Oracle RAC environments, automatic big table caching is only supported 
in parallel queries so both settings are required.
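Putting the two settings together, enabling automatic big table caching for both serial and parallel queries might look like this (the 40% figure is just an example value):

```sql
ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = AUTO;
ALTER SYSTEM SET DB_BIG_TABLE_CACHE_PERCENT_TARGET = 40;
```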

If a large table is approximately the size of the combined size of the big 
table cache of all instances, then the table is partitioned and cached, or 
mostly cached, on all instances. An in-memory query could eliminate most 
disk reads for queries on the table, or the database could intelligently 
read from disk only for that portion of the table that does not fit in the 
big table cache. If the big table cache cannot cache all the tables to be 
scanned, only the most frequently accessed tables are cached, and the rest are
read through direct read automatically.

The DB_BIG_TABLE_CACHE_PERCENT_TARGET parameter determines the percentage 
of the buffer cache size used for scans. If DB_BIG_TABLE_CACHE_PERCENT_TARGET 
is set to 80 (%), then 80 (%) of the buffer cache is used for scans and the 
remaining 20 (%) is used for OLTP workloads.

The default for DB_BIG_TABLE_CACHE_PERCENT_TARGET is 0 (disabled) and the 
upper limit is 90 (%) reserving at least 10% buffer cache for usage besides 
table scans. When the value is 0, in-memory queries run with 
the existing least recently used (LRU) mechanism. You can adjust the 
DB_BIG_TABLE_CACHE_PERCENT_TARGET parameter dynamically.

Use the following guidelines when setting the DB_BIG_TABLE_CACHE_PERCENT_TARGET parameter:

-> If you do not enable automatic DOP in an Oracle RAC environment, then 
you should not set this parameter because the big table cache section is 
not used in that situation.

-> When setting this parameter, you should consider the workload mix: 
how much of the workload is OLTP (inserts, updates, and random access) 
and how much involves table scans. Because data warehouse workloads often 
perform large table scans, consider giving the big table cache section a 
higher percentage of the buffer cache space for data warehouses.

-> This parameter can be dynamically changed if the workload changes. 
The change could take some time depending on the current workload to reach 
the target, because buffer cache memory might be actively 
used at the time.

1. By default, Oracle considers any table larger than 2% of the SGA size 
to be a large table.

2. Also, be aware that db_big_table_cache_percent_target appears to work 
only with the default buffer cache, so buffers of non-default block sizes 
will not benefit from this feature. Remember, too, that db_big_table_cache_percent_target 
only caches full-table scan access (db file scattered reads); the parameter 
will not be effective for tuning OLTP systems that access data via an 
index (i.e. high db file sequential reads).

-> The db_big_table_cache_percent_target parameter is intended for 
data warehouses (decision support systems) that perform frequent 
large-table full-table scans, and for systems with hundreds of 
gigabytes of RAM available for data caching.

Views:
V$BT_SCAN_CACHE
V$BT_SCAN_OBJ_TEMPS

Demo for db_big_table_cache_percent_target 
-------------------------------------------

SQL> show parameter db_big_table_cache_

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_big_table_cache_percent_target string 0
SQL>
SQL> create table test_big_table
2 (id number,name varchar2(25));

Table created.

SQL> BEGIN
2 FOR I IN 1..1000000
3 LOOP
4 INSERT INTO TEST_BIG_TABLE
5 VALUES(I,'HITESH'||I);
6 COMMIT;
7 END LOOP;
8 END;
9 /

PL/SQL procedure successfully completed.

SQL> select count(*) from test_big_table;

COUNT(*)
----------
1000000

-bash-4.1$ sql
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

SQL> select component, current_size/1024/1024 from v$memory_dynamic_components
2 where component = 'DEFAULT buffer cache';

COMPONENT CURRENT_SIZE/1024/1024
-------------------------------------------------
DEFAULT buffer cache 248


SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 768M
sga_target big integer 768M
unified_audit_sga_queue_size integer 1048576
SQL>
SQL> alter system set db_big_table_cache_percent_target=20;

System altered.

SQL> alter session set container=pdb1;

Session altered.

SQL> show parameter big_table

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_big_table_cache_percent_target string 20
SQL>

SQL> connect sh/sh@racnode2/pdb1
Connected.

SQL> select count(*) from test_big_table;

COUNT(*)
----------
1000000

SQL> select bt_cache_alloc,bt_cache_target from v$bt_scan_cache;

BT_CACHE_ALLOC BT_CACHE_TARGET
-------------- ---------------
.095753785 20

SQL> select object_count,memory_buf_alloc,min_cached_temp from v$bt_scan_cache;

OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
------------ ---------------- ---------------
1 2909 1000

SQL> select * from v$bt_scan_obj_temps;

TS# DATAOBJ# SIZE_IN_BLKS TEMPERATURE POLICY CACHED_IN_MEM CON_ID
------------------------ ---------- ------------ ----------- ---------- -------------
196611 92634 3268 2000 MEM_PART 2909 0

SQL> select object_name from dba_objects where object_id = 92634;

OBJECT_NAME
--------------------------------------------------------------------------------
TEST_BIG_TABLE

SQL> ANALYZE TABLE TEST_BIG_TABLE COMPUTE STATISTICS;

Table analyzed.

SQL> select table_name, blocks from dba_tables where table_name = 'TEST_BIG_TABLE';

TABLE_NAME BLOCKS
--------------------------
TEST_BIG_TABLE 3268


SQL> select count(*) from test_big_table;

COUNT(*)
----------
1000000

SQL> select * from v$bt_scan_cache;

BT_CACHE_ALLOC BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP CON_ID
-------------- --------------- ------------ ---------------- ---------------------------
.09845293 20 1 2991 1000 0

SQL> select * from v$bt_scan_obj_temps;

TS# DATAOBJ# SIZE_IN_BLKS TEMPERATURE POLICY CACHED_IN_MEM CON_ID
---------- ---------- ------------ ----------- ---------- --------------------------
196611 92634 3268 3000 MEM_PART 2991 0

This temperature value helps Oracle prioritize tables in memory and 
identify which tables are queried frequently. Based on this information, 
Oracle decides which table stays in memory and which has to move out.

Remember that there is currently no option to move an individual table 
into this cache; the process is fully automatic and at Oracle's discretion. 
Your table may or may not be moved into the cache, but if you have big 
tables that you think would benefit, this feature is worth evaluating.

[3] Force Full Database Caching Mode

An Oracle Database instance can cache the full database in the buffer cache.

6.5.1 About Force Full Database Caching Mode

In default caching mode, Oracle Database does not always cache the underlying 
data when a user queries a large table because doing so might remove more 
useful data from the buffer cache. Starting with Oracle Database 12c Release 1 
(12.1.0.2), if the Oracle Database instance determines that there is enough space 
to cache the full database in the buffer cache and that it would be beneficial 
to do so, then the instance automatically caches the full database in the buffer cache.

Caching the full database in the buffer cache might result in performance 
improvements.

You can force an instance to cache the database in the buffer cache using an 
ALTER DATABASE FORCE FULL DATABASE CACHING statement.

This statement puts the instance in force full database caching mode.
In this mode, Oracle Database assumes that the buffer cache is large enough 
to cache the full database and tries to cache all blocks that are accessed 
subsequently.

When an Oracle Database instance is in force full database caching mode, 
the following query returns YES (on this instance, which is still in 
default caching mode, it returns NO):

select force_full_db_caching from v$database;

FOR
---
NO

-> When an instance is in default caching mode, NOCACHE LOBs are not cached 
in the buffer cache.
-> However, when an instance is in force full database caching mode, 
NOCACHE LOBs can be cached in the buffer cache.
-> Also, both LOBs that use SecureFiles LOB storage and LOBs that use 
BasicFiles LOB storage can be cached in the buffer cache in force full 
database caching mode only.

Note:
1. When an instance is put in force full database caching mode, database 
objects are not loaded into the buffer cache immediately. Instead, they 
are cached in the buffer cache when they are accessed.

2. In a multitenant environment, force full database caching mode applies 
to the entire multitenant container database (CDB), including all of its
 pluggable databases (PDBs).

3. Information about force full database caching mode is stored in the 
control file. If the control file is replaced or recreated, then the 
information about the force full database caching mode is lost. 
A restored control file might or might not include this information, 
depending on when the control file was backed up.

6.5.2 Before Enabling Force Full Database Caching Mode

The database must be at 12.0.0 or higher compatibility level to enable 
force full database caching mode for the database instance. In addition, 
ensure that the buffer cache is large enough to cache the entire 
database.

When a database is configured to use the SGA_TARGET or MEMORY_TARGET 
initialization parameter for automatic memory management, the size of 
the buffer cache might change depending on the workload. 
Run the following query to estimate the buffer cache size when the 
instance is under normal workload:

SELECT NAME, BYTES FROM V$SGAINFO WHERE NAME='Buffer Cache Size';

This query returns the buffer cache size for all possible block sizes. 
If your database uses multiple block sizes, then it is best to ensure that
the buffer cache size for each possible block size is bigger than the total 
database size for that block size.
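One way to compare, per block size, the data to be cached against the cache is to total the segments in each tablespace. A rough sketch (it ignores free space inside segments):

```sql
SELECT ts.block_size,
       ROUND(SUM(s.bytes)/1024/1024) AS total_mb
FROM   dba_segments s
JOIN   dba_tablespaces ts ON s.tablespace_name = ts.tablespace_name
GROUP  BY ts.block_size;
```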

You can determine the buffer cache size for non-default block sizes with 
the DB_nK_CACHE_SIZE parameter.
With SGA_TARGET or MEMORY_TARGET, the buffer cache size for the default 
block size in the default pool might change depending on the workload.

The following query returns the current buffer cache size for the default 
block size in the default pool:

SELECT COMPONENT, CURRENT_SIZE FROM V$SGA_DYNAMIC_COMPONENTS 
WHERE COMPONENT LIKE 'DEFAULT buffer cache';

If you are estimating memory requirements for running a database fully in 
the buffer cache, then you can estimate the size of the buffer cache as 
one of the following:

If you plan to use SGA_TARGET, then you can estimate the buffer cache size 
as 60% of SGA_TARGET.

If you plan to use MEMORY_TARGET, then you can estimate the SGA size as 60% 
of MEMORY_TARGET, and buffer cache size as 60% of SGA size. That is, you can 
estimate the buffer cache size as 36% of MEMORY_TARGET.
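For example, with MEMORY_TARGET set to 10 GB the estimates work out as:

```
SGA estimate          = 0.60 * 10 GB = 6 GB
Buffer cache estimate = 0.60 * 6 GB  = 3.6 GB  (36% of MEMORY_TARGET)
```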

6.5.3 Enabling Force Full Database Caching Mode

Ensure that the database is mounted but not open.
ALTER DATABASE FORCE FULL DATABASE CACHING;
SELECT FORCE_FULL_DB_CACHING FROM V$DATABASE;
ALTER DATABASE OPEN;
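Starting from an open instance, the complete sequence looks like this (FORCE_FULL_DB_CACHING should return YES after the ALTER):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FORCE FULL DATABASE CACHING;
SELECT FORCE_FULL_DB_CACHING FROM V$DATABASE;
ALTER DATABASE OPEN;
```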

6.5.4 Disabling Force Full Database Caching Mode

Ensure that the database is mounted but not open.

ALTER DATABASE NO FORCE FULL DATABASE CACHING;
SELECT FORCE_FULL_DB_CACHING FROM V$DATABASE;
ALTER DATABASE OPEN;

[4] Database Smart Flash Cache

6.6 Configuring Database Smart Flash Cache

The Database Smart Flash Cache feature is a transparent extension of the 
database buffer cache using solid state device (SSD) technology. Database 
Smart Flash Cache can greatly improve the performance of Oracle databases 
by reducing the amount of disk I/O at a much lower cost than adding an 
equivalent amount of RAM.

6.6.1 When to Configure Database Smart Flash Cache
You should consider configuring Database Smart Flash Cache when certain 
conditions are met.

Consider adding Database Smart Flash Cache when all of the following conditions 
are true:

1. Your database is running on the Solaris or Oracle Linux operating systems. 
Database Smart Flash Cache is supported on these operating systems only.

2. The Buffer Pool Advisory section of your Automatic Workload Repository 
(AWR) report or STATSPACK report indicates that doubling the size of the 
buffer cache would be beneficial.

3. db file sequential read is a top wait event.

4. You have spare CPU.

Note:
You cannot share one flash file among multiple instances. However, you can 
share a single flash device among multiple instances if you use a logical 
volume manager or similar tool to statically partition the flash device.

6.6.2 Sizing Database Smart Flash Cache
As a general rule, size Database Smart Flash Cache to be between 2 times 
and 10 times the size of the buffer cache.

Any multiplier less than two would not provide any benefit. If you are using 
automatic shared memory management, make Database Smart Flash Cache between 2 
times and 10 times the size of SGA_TARGET. Using 80% of the size of 
SGA_TARGET instead of the full size would also suffice for this calculation.
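Using the 768M SGA_TARGET from the demo above as an example:

```
SGA_TARGET              = 768 MB
80% of SGA_TARGET       = ~614 MB
Flash cache (2x to 10x) = ~1.2 GB to ~6 GB
```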

6.6.3 Tuning Memory for Database Smart Flash Cache

For each database block moved from the buffer cache to Database Smart Flash 
Cache, a small amount of metadata about the block is kept in the buffer cache.

6.6.4 Database Smart Flash Cache Initialization Parameters

DB_FLASH_CACHE_FILE = /dev/sda, /dev/sdb, /dev/sdc
-> Specifies a list of paths and file names for the files to contain Database 
Smart Flash Cache, in either the operating system file system or an Oracle 
Automatic Storage Management disk group.
-> A maximum of 16 files is supported.

DB_FLASH_CACHE_SIZE = 32G, 32G, 64G
-> Specifies the size of each file in your Database Smart Flash Cache. 
Each size corresponds with a file specified in DB_FLASH_CACHE_FILE.

-> Each size specification must be less than or equal to the physical memory 
size of its flash device. 

-> The size is expressed as nG, indicating the number of gigabytes (GB).
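A minimal configuration with a single flash device might look like the following sketch (the device path and size are illustrative; since the cache cannot be resized dynamically, the change goes to the spfile and takes effect after an instance restart):

```sql
ALTER SYSTEM SET DB_FLASH_CACHE_FILE = '/dev/sdb' SCOPE=SPFILE;
ALTER SYSTEM SET DB_FLASH_CACHE_SIZE = 64G SCOPE=SPFILE;
```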

You can query the V$FLASHFILESTAT view to determine the cumulative latency 
and read counts of each file and compute the average latency.
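A sketch of such a query, computing the average single-block read latency per flash file from the view's cumulative counters:

```sql
SELECT name,
       singleblkrds,
       singleblkrdtim_micro,
       ROUND(singleblkrdtim_micro / NULLIF(singleblkrds, 0)) AS avg_latency_us
FROM   v$flashfilestat;
```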

Dynamically changing the size of Database Smart Flash Cache is not supported.
=======================
Path to Documentation:
======================= 
Data Warehousing and Business Intelligence 
-> Database VLDB and Partitioning Guide 
-> Automatic Big Table Caching

Oracle Database Administrator’s Guide 
-> 6. Managing Memory 
-> 6.7 Using the In-Memory Column Store
-> 6.5 Using Force Full Database Caching Mode
-> 6.6 Configuring Database Smart Flash Cache

Database SQL Language Reference -> CREATE DIRECTORY

Thank you for visiting this blog 🙂