
Teradata locking row for access

Teradata — how to SELECT without blocking writers? (Locking row for access vs. locking table for access)

I am developing an application that retrieves data from a Teradata DWH. The DWH developers told me to use LOCK ROW FOR ACCESS before all SELECT queries to avoid delaying writes to those table(s).

Being very familiar with MS SQL Server's WITH (NOLOCK) hint, I see LOCK ROW FOR ACCESS as its equivalent. However, INSERT and UPDATE statements do not allow LOCK ROW FOR ACCESS (it is unclear to me why this fails, since the lock should apply to the table(s) the statement selects from, not the one I am inserting into):
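A minimal sketch of the kind of statement that fails (table and column names are hypothetical):

```sql
-- Rejected by Teradata: LOCKING ROW FOR ACCESS cannot
-- modify an INSERT-SELECT request
LOCKING ROW FOR ACCESS
INSERT INTO TargetTable (col1, col2)
SELECT col1, col2 FROM SourceTable;
```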

I have seen that LOCKING TABLE ... FOR ACCESS can be used, but it is unclear whether it matches my need (the NOLOCK equivalent, i.e. it does not block writers).

Question: which locking modifier should I use to minimize write delays when selecting within an INSERT statement?

3 Answers

You cannot use LOCK ROW FOR ACCESS on an INSERT-SELECT statement. The INSERT statement places a WRITE lock on the table it writes to and a READ lock on the tables it selects from.

If it is absolutely necessary that you get LOCK ROW FOR ACCESS on the INSERT-SELECT, then consider creating a view like:
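The answer's view definition did not survive the scrape; a plausible sketch (all names are hypothetical):

```sql
REPLACE VIEW SourceTable_V AS
LOCKING ROW FOR ACCESS
SELECT col1, col2 FROM SourceTable;
```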

And then perform your INSERT-SELECT from the view:
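Again, the original code block was lost in the scrape; assuming a view SourceTable_V defined with LOCKING ROW FOR ACCESS over the source table:

```sql
INSERT INTO TargetTable (col1, col2)
SELECT col1, col2 FROM SourceTable_V;
```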

Not a direct answer, but I have always understood this to be one of the reasons your users/applications/etc. should access data through views. Views locked FOR ACCESS do not block inserts/updates to the underlying table, whereas selecting directly from the table takes a READ lock, which does block inserts/updates.

The downside of access locking is that dirty reads become possible.

Change your query to take an access lock on the source table and you should be good.
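A hedged sketch of that rewrite, using the LOCKING request modifier on the source table only (the answer's original code was lost in the scrape; names are hypothetical):

```sql
LOCKING TABLE SourceTable FOR ACCESS
INSERT INTO TargetTable (col1, col2)
SELECT col1, col2 FROM SourceTable;
```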


Data Analysis Example

A knowledge repository of the things I found interesting while working.


Tuesday, 18 December 2012

Locks in Teradata — how to effectively do Insert and Select on the same table:

There are four kinds of locks in Teradata:

Exclusive — Exclusive locks are placed when a DDL is fired on the database or table, meaning that the Database object is undergoing structural changes. Concurrent accesses will be blocked.

Write — A Write lock is placed during a DML operation. INSERT, UPDATE and DELETE trigger a write lock. It may still allow users to fire SELECT queries, but data consistency will not be ensured.

Compatibility: Access locks. Users not concerned with data consistency may still read, but the underlying data may change and the user may get a "dirty read".

Read — This lock happens due to a SELECT access. A Read lock is not compatible with Exclusive or Write locks.

Compatibility: Supports other Read locks and Access Locks

Access — Placed when a user uses the LOCKING FOR ACCESS phrase. An Access lock allows users to read a database object that is already under a write or read lock. An Access lock is not compatible with Exclusive locks. An Access lock does not ensure data integrity and may lead to a "stale read".

Categorised on Levels, we can have locks at Database, Table or Row-level.

Row Hash Lock: A Row Hash lock is a 1-AMP operation where the Primary Index is utilized in the WHERE clause of the query.

How it helps: Rather than locking the entire table, Teradata locks only those rows that have the same Row Hash value as generated in the WHERE clause.

Syntax: Locking Row for Access
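A hedged example of a row-hash-level access lock (table and PI column names are hypothetical):

```sql
LOCKING ROW FOR ACCESS
SELECT *
FROM FACT_CUST
WHERE cust_id = 12345;  -- cust_id assumed to be the primary index
```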

Practical Example:

This is the situation faced today:

SQL1: INSERT INTO FACT_CUST (col1, col2) SELECT col1, col2 FROM WRK_CUST;

SQL2: SELECT * FROM FACT_CUST;

Since SQL1 was submitted first, there is a Write lock on FACT_CUST. SQL2 needs a Read lock on FACT_CUST, so it will wait until SQL1 completes.

When inserting or updating rows from a query (Insert/ Select or Update where the primary index is not specified), a Write lock is placed on the table.

If you Insert Values or Update where PIVal = 12345 (via SQL or Load Utility) then a Write lock is placed at the row level.
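For illustration, single-row statements of this kind take only a row-hash write lock (column names and values are hypothetical; col1 is assumed to be the PI):

```sql
INSERT INTO FACT_CUST (col1, col2) VALUES (12345, 1);
UPDATE FACT_CUST SET col2 = 2 WHERE col1 = 12345;
```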


If you do a select without specifying the PI values, a Read lock is placed at table level. A Read Lock will not read through an existing write lock (and vice versa) so the one who gets in second will be delayed or bounced if NOWAIT is specified.

If you put a LOCKING ROW (or Tablename) FOR ACCESS on the Select query, the Select will read through any Write lock at row or table level. (So-called «Dirty Read».) This only applies to Select — a Write lock cannot be downgraded to an Access Lock.
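Applied to the scenario above, SQL2 could be rewritten so it reads through SQL1's write lock:

```sql
LOCKING FACT_CUST FOR ACCESS
SELECT * FROM FACT_CUST;  -- dirty read: may see uncommitted rows
```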

To overcome «Stale Read», we can allow read access through Views — put the LOCKING FOR ACCESS clause in the views.

Let me explain the strategy I use:

I have a view that is refreshed everyday.
View definition:

replace view test_v
as
locking row for access
(
select * from test1
);

Now I load the delta rows into test2. Then once the processing completes and test2 table is ready, refresh the view as:

replace view test_v
as
locking row for access
(
select * from test2
);

Next step will be to move the contents of test2 into test1:

delete from test1 all;
insert into test1 select * from test2;

This always gives consistent data with very little downtime (required only during the view refresh).



Data Concurrency and Consistency

Row Share Table Locks (RS)

Permitted Operations
Prohibited Operations

A row share table lock (also sometimes called a subshare table lock, SS) indicates that the transaction holding the lock has locked rows in the table and intends to update them. A row share table lock is automatically acquired for a table when one of the following SQL statements is run:

SELECT ... FROM table ... FOR UPDATE OF ... ;

LOCK TABLE table IN ROW SHARE MODE;


A row share table lock is the least restrictive mode of table lock, offering the highest degree of concurrency for a table.

Permitted Operations: A row share table lock held by a transaction allows other transactions to query, insert, update, delete, or lock rows concurrently in the same table. Therefore, other transactions can obtain simultaneous row share, row exclusive, share, and share row exclusive table locks for the same table.

Prohibited Operations:
A row share table lock held by a transaction prevents other transactions from gaining exclusive write access to the same table; the only statement it blocks is:

LOCK TABLE table IN EXCLUSIVE MODE;
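A hedged two-session sketch of this restriction (the table name is borrowed from the example that follows):

```sql
-- Session 1: acquire a row share table lock
LOCK TABLE value_types IN ROW SHARE MODE;

-- Session 2: blocked until session 1 commits or rolls back;
-- with NOWAIT it fails immediately (ORA-00054: resource busy)
LOCK TABLE value_types IN EXCLUSIVE MODE NOWAIT;
```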

create table value_types (
value_type int not null primary key,
limit_min int not null,
limit_max int not null);

create table value_data (
value_id int not null primary key,
value_type int not null references value_types,
value int not null);

create or replace trigger value_types_update
before update on value_types
for each row
declare
  error_count int;
begin
  select count (*) into error_count
    from value_data
   where value_type = :new.value_type
     and not value between :new.limit_min and :new.limit_max;
  if error_count > 0 then
    raise_application_error (-20001,
      'The existing values don''t fit the new limits');
  end if;
end;
/
create or replace trigger values_insert_or_update
before insert or update on value_data
for each row
declare
  error_count int;
begin
  select count (*) into error_count
    from value_types
   where value_type = :new.value_type
     and not :new.value between limit_min and limit_max;
  if error_count > 0 then
    raise_application_error (-20002,
      'The new values don''t fit the limits');
  end if;
end;
/

insert into value_types values (1, 1, 10);
commit;

insert into value_data values (1, 1, 0) /* error */;
insert into value_data values (2, 1, 11) /* error */;
insert into value_data values (3, 1, 5) /* OK */;

update value_types set limit_min = 6 where value_type = 1 /* error */;
update value_types set limit_max = 4 where value_type = 1 /* error */;
update value_types set limit_max = 5 where value_type = 1 /* OK */;
rollback;

insert into value_data values (4, 1, 5) /* OK */;
-- do not commit yet!

update value_types set limit_max = 4 where value_type = 1;
commit;

select * from value_types;
select * from value_data;

How To Use Teradata Load Isolation

By Roland Wenzlofsky


Introduction to Teradata Load Isolation

Transaction management takes care that as many transactions as possible can access (SELECT) and change (UPDATE, INSERT, DELETE) the same database object concurrently. By serializing conflicting access and change operations on the same database object, data integrity is ensured.

Consider the following scenarios:

  • Transaction A wants to read from a table
  • Transaction B wants to update rows in the same table

Depending on which transaction is served first, one of the following serialized actions takes place:

  1. A gets the read lock on the table, B has to wait until A finishes retrieving the rows or
  2. B gets the write lock; A has to wait until the update finishes

Even before Teradata 15.10, there was a method for a transaction to read from a table while a concurrent change operation takes place: the so-called dirty read, which allows uncommitted rows to be used.

For the scenario above, this means that A and B can take place at the same time, but A might retrieve uncommitted rows. This definitely happens if the changing transaction B is rolled back after transaction A has read the rows. The dirty read is implemented in Teradata with the LOCKING FOR ACCESS modifier:

LOCKING TheTable FOR ACCESS
SELECT * FROM TheTable WHERE columnA = 10;

Teradata implements the access modifier at table level and at row level, the latter locking only one row or a small set of rows. Here is an example of the row-level syntax:

LOCKING ROW FOR ACCESS
SEL * FROM TheTable
WHERE columnA = 10;

Keep in mind that Teradata transaction management may choose to upgrade the lock level if required (such as from row level lock to table level lock).

Access locking allows higher performance and should be used whenever possible: read operations don't have to wait for change operations to finish, and vice versa. Unfortunately, this approach is not always applicable.

It can be acceptable to use access locking for highly aggregated reports, but it may be inadequate for frequently accessed tables that need 100% integrity at all times.

Load isolation was introduced with Teradata 15.10 and gives us the possibility to read committed rows while another transaction is changing them at the same time. Load isolation adds data integrity to the functionality the LOCKING FOR ACCESS modifier already provides.

The main reason to use load isolation is, of course, the same we had for dirty reads: Performance improvement by increasing the workload concurrency.

Teradata implements load isolation on table level, and it can be used in your queries by adding the new LOAD COMMITTED modifier. In contrast, “locking for access” is applied only on query level (in your SQL statement or view definition).

Several features were added to support load isolation:

  • A new attribute is stored in the table header which identifies a table capable of load isolation
  • All table rows are versioned, using the internal “load id” column to be able to find the different row versions uniquely
  • Each table with load isolation keeps the last committed “load id” values in the data dictionary

The system reads only committed row versions, using the "last committed load id" stored in the data dictionary. INSERT, UPDATE and DELETE statements are implemented as follows:

  • INSERT: The new row is added with the new “load id.”
  • UPDATE: A new version of the row is created (and identified by the new “load id”)
  • DELETE: The row is not physically deleted, only the “load id” of the DELETE statement is added (and kept together with the “load id” of the original INSERT). These logically deleted rows are not removed automatically from the table, but a new ALTER TABLE feature was introduced to execute physical deletion manually.

Load isolation is only available for regular tables. VOLATILE, ERROR, QUEUE, TEMPORARY or GLOBAL TEMPORARY TRACE tables cannot be defined as load isolated tables.

Furthermore, the following restrictions exist:

  • Column partition tables cannot be load isolated.
  • Hash indexes are not available for load isolated tables.
  • A compressed join index is not allowed on a load isolated table.
  • Permanent journaling is not available for load isolated tables.
  • Load isolated tables cannot be part of a replication group.

Teradata Load Isolation and Indexing

Of course, load isolation can be (or is automatically) inherited by the table’s secondary indexes and join indexes, allowing concurrent index reads of committed rows.

The USI of a load isolated table always carries the commit property of the base table row in the index row. The following two statements can be used to add the USI:

CREATE UNIQUE INDEX (PK) ON TheTable;
CREATE UNIQUE INDEX (PK) WITH LOAD IDENTITY ON TheTable;

In both cases, a concurrent USI read can continue to get committed data via a single USI row access (no base table access needed).

If the NUSI of a load isolated table carries the commit property for the base table ROWIDs, the index alone can be used to cover the query; otherwise, the execution plan involves access to the base table. The behaviour is defined by the syntax below:

CREATE INDEX (PK) WITH LOAD IDENTITY ON TheTable;    -- base table access can be avoided if the NUSI covers the query
CREATE INDEX (PK) WITH NO LOAD IDENTITY ON TheTable; -- always requires a base table access

As join indexes are quite similar to regular permanent tables, any join index defined on top of a load isolated table becomes a load isolated join index automatically. No special syntax is required in the join index DDL.

Usage of Load Isolated Tables

Load isolated tables need more permanent table space because:

  • versions of each row are stored, and
  • the additional bookkeeping information (load ids) consumes eight bytes per row.

Load isolation can therefore be turned off temporarily to avoid versioning during huge change operations.

In this case, exclusive locks have to be applied to prevent readers using the LOAD COMMITTED modifier from reading uncommitted rows.

Load isolation is turned on during table creation:

CREATE TABLE TheTable, WITH CONCURRENT ISOLATED LOADING FOR ALL
(
PK INTEGER NOT NULL,
columnA INTEGER
) UNIQUE PRIMARY INDEX (PK);

The FOR ALL level allows INSERT, DELETE and UPDATE operations to be load isolated.

If it is ensured that mainly INSERT operations occur on the load isolated table, the FOR INSERT level is the right choice: no logically deleted rows and no versions of updated rows are kept, so read performance is better compared with fully load isolated tables.

If you want to turn off load isolation temporarily, the FOR NONE level can be used in the ALTER TABLE statement. Any permanent table can be changed from load isolated to regular and vice versa.
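A hedged sketch of toggling the level; treat the exact ALTER TABLE clause as an assumption, since it may vary between releases:

```sql
-- Temporarily disable load isolation before a huge batch change
ALTER TABLE TheTable SET WITH CONCURRENT ISOLATED LOADING FOR NONE;

-- ... run the batch job under an exclusive lock ...

-- Re-enable full load isolation afterwards
ALTER TABLE TheTable SET WITH CONCURRENT ISOLATED LOADING FOR ALL;
```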

After the load isolated table has been defined, it can be accessed using the LOAD COMMITTED modifier:

CREATE VIEW TheTableV AS LOCKING TheTable FOR LOAD COMMITTED
SELECT * FROM TheTable WHERE columnA=10;

Read operations on this view will only return the last committed rows (which are identified by the system accessing the data dictionary).

If desired, the most recent (but possibly uncommitted) rows can be read from the load isolated table TheTable by using the LOCKING FOR ACCESS modifier:

LOCKING TheTable FOR ACCESS
SELECT * FROM TheTable WHERE columnA=10;

Load Isolation Administrative Tasks

The system does not automatically delete logically deleted rows. A newly introduced ALTER TABLE statement has to be applied:

ALTER TABLE TheTable RELEASE DELETED ROWS;

Keep in mind that this statement requires an exclusive lock on the load isolated table. Additionally, statistics should be refreshed after the cleanup.
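For example, refreshing statistics after the cleanup might look like this (the column choice is an assumption based on the table definition shown earlier):

```sql
COLLECT STATISTICS ON TheTable COLUMN (PK);
```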

Summary

Load isolation increases concurrency by keeping committed versions of table rows and is a huge step forward in Teradata transaction management (although other database vendors were using the "committed read" approach long before Teradata introduced this feature).

I like the ability to define load isolation per table, and the option to turn it off for certain operations (which might otherwise heavily affect performance and disk space).

I don't like the lack of automatic garbage collection. Monitoring load isolated tables for logically deleted rows and the table space they consume seems an entirely unneeded administration task. It is even worse: while permanent tables can be administered quite comfortably with simple ALTER TABLE statements, join indexes have to be cleaned via stored procedures. I hope we will see some improvement here in the future.
