If the table containing the changes can have more than one occurrence of the "primary key" of the other table -- no chance for a single statement. There are probably an infinite number of ways to do it; that was just the first that popped into mind.
If you tried:

    create table t1 ( x int primary key, y int );
    create table t2 ( x int, y int );

    insert into t1 values ( 1, 0 );
    insert into t2 values ( 1, 100 );
    insert into t2 values ( 1, 200 );

then

    update ( select t1.y t1_y, t2.y t2_y
               from t1, t2
              where t1.x = t2.x )
       set t1_y = t2_y;

would be "ambiguous" -- there is no way we could know whether y should end up with 100 or 200 -- hence we don't even permit it. Make sure all four columns are the SAME TYPE, of course.

Hi Tom, as you said, in my scenario I cannot use UPSERT (MERGE) and I have to write PL/SQL to achieve: 1.) insert/update from the temporary table to the actual table.
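The ambiguity is easy to see, and to test for, mechanically. Here is a small sketch using Python's sqlite3 (standing in for Oracle) that builds the t1/t2 example above and flags the duplicated join keys that would make the update nondeterministic:

```python
import sqlite3

# Mirror the t1/t2 example: t2 has two rows with x = 1, so an
# "update t1 from t2" joined on x would be ambiguous.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table t1 ( x int primary key, y int );
    create table t2 ( x int, y int );
    insert into t1 values ( 1, 0 );
    insert into t2 values ( 1, 100 );
    insert into t2 values ( 1, 200 );
""")

# Detect the join keys that make the update ambiguous: any x appearing
# more than once in t2 means we cannot know which y should win.
dups = con.execute(
    "select x, count(*) from t2 group by x having count(*) > 1"
).fetchall()
print(dups)  # -> [(1, 2)]
```

Running this duplicate check before attempting the update is the programmatic equivalent of the primary/unique constraint Oracle insists on.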
Note - I didn't design this system, but I have to work with it.

Elapsed: .22

that won't "hang the computer" for very long...

    update ( select a.mid_table_id a_mtid, b.mid_table_id b_mtid
               from big_table a, mid_table b
              where a.join_col = b.join_col )
       set a_mtid = b_mtid;

that assumes (naturally) that a primary/unique constraint exists on mid_table(join_col) -- if not, add one -- it *must be true*, or the update would be ambiguous (if many rows can exist in B, which one do we use?)

Tom, we have a table with the following structure:

    inv_id  addr_1  addr_2  addr_3  addr_4
    ------  ------  ------  ------  ------
         1  xxx     null    null    yyy
         2  null    xxx     null    yyy
         3  null    null    xxx     yyy
         4  null    null    null    xxx
         5  xxx     null    yyy     null
         6  xxx     null    yyy     zzz

The output should be:

    inv_id  addr_1  addr_2  addr_3  addr_4
    ------  ------  ------  ------  ------
         1  xxx     yyy     null    null
         2  xxx     yyy     null    null
         3  xxx     yyy     null    null
         4  xxx     null    null    null
         5  xxx     yyy     null    null
         6  xxx     yyy     zzz     null

All NULL values should move to one side, and all NOT NULL values to the other. Thanks, dmv

Also, if your followup includes an example you want me to look at, I'll need it to have a create table, insert into statements and such that I can easily cut and paste into sqlplus myself (like I give you) in order to play with it.

    update
    ( select id, olda, oldb, oldc, oldd,
             a,
             b,
             decode(shift,0,c,d) c,
             decode(shift,0,d,null) d
        from (
             select t.*, decode(c,null,1,0) shift
               from (
                    select id, olda, oldb, oldc, oldd,
                           a,
                           decode(shift,0,b,1,c,2,d) b,
                           decode(shift,0,c,1,d) c,
                           decode(shift,0,d,null) d
                      from (
                           select t.*, decode(b,null,decode(c,null,2,1),0) shift
                             from (
                                  select id, olda, oldb, oldc, oldd,
                                         decode(shift,0,a,1,b,2,c,3,d) a,
                                         decode(shift,0,b,1,c,2,d) b,
                                         decode(shift,0,c,1,d) c,
                                         decode(shift,0,d,1,null) d
                                    from (
                                         select a olda, b oldb, c oldc, d oldd, t.*,
                                                decode(a,null,decode(b,null,decode(c,null,3,2),1),0) shift
                                           from t
                                         )
                                  ) t
                           )
                    ) t
             )
    )
    set olda = a, oldb = b, oldc = c, oldd = d
    /

    6 rows updated.
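The decode cascade above performs the shifting entirely in SQL. The per-row logic it implements can be stated in a few lines; here is a minimal sketch in Python (data values copied from the example; the helper name is mine):

```python
# Within each row, slide the non-NULL address parts to the left,
# padding the right-hand side with NULLs (None here).
def shift_left(row):
    """Move non-None values to the front, padding with None."""
    vals = [v for v in row if v is not None]
    return vals + [None] * (len(row) - len(vals))

# A few of the (addr_1, addr_2, addr_3, addr_4) rows from the question,
# keyed by inv_id.
rows = {
    1: ('xxx', None, None, 'yyy'),
    5: ('xxx', None, 'yyy', None),
    6: ('xxx', None, 'yyy', 'zzz'),
}
shifted = {k: tuple(shift_left(v)) for k, v in rows.items()}
print(shifted[1])  # -> ('xxx', 'yyy', None, None)
```

The nested decodes in the SQL are doing exactly this: each level computes how far the remaining columns must shift based on which column is NULL.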
But the second WHERE clause simply returns the message "more than one row is returned", since the id is unpredictable and this creates a "many to many" relationship between the two tables. Many Thanks

(script)

    REM* the where-clause of the update cannot work
    UPDATE table b
       SET column_b1 = ( SELECT MAX(column_a1)
                           FROM table_a a, table_b b
                          WHERE a.id=GROUP BY a.id)
     WHERE table_IN (SELECT MIN(id) FROM table_a GROUP BY id);

Your example is somewhat confusing -- you ask "update column a1 in table a where data in column b1 in table b", but your update shows you updating column b1 in table B with some data from table A.

Every month the client office gives data (NEW and EDITED), by date range, to the head office on CD.

Now, you "two step" it:

    insert into gtt
    select b.id, count(*) cnt
      from tabb b, taba a
     where =
       and a.cycle = b.cycle
       and b.site_id = 44
       and b.rel_cd in ( 'code1', 'code2', 'code3' )
       and b.groupid = '123'
       and is null
     group by
    /

that gets all of the id/cnts for only the rows of interest. Additionally -- given the way the where and set clauses are CODED in the above -- it would succeed. The head office merges the data into their system. For migrating the data, first of all I create another temporary user named VISTEMP, then continue with this kind of code: insert into VISTEMP.

Now we can update the join:

    update ( select a.pop,
               from taba a, gtt b
              where = )
       set pop = cnt
    /

and that's it.

Hi Tom, I'm selecting approximately 1 million records from some tables and populating another set of tables. Oracle will return ORA-01031 (insufficient privileges). Hi Tom, I have a related question for an UPDATE that takes unexpectedly long. You said to use the Oracle9i MERGE statement when inserting or updating data from a staging table into a history table.
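The "two step" approach can be mocked up end to end. This sketch uses Python's sqlite3 in place of Oracle, with an assumed join column (id), since the original predicates are partly elided; the primary key on the work table is what makes step two unambiguous. SQLite has no updatable join views, so a correlated subquery stands in for Oracle's update-of-a-join:

```python
import sqlite3

# Hypothetical tables following the example's names (taba, tabb, gtt,
# pop, cnt); the join column "id" is an assumption.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table taba ( id int primary key, pop int );
    create table tabb ( id int, rel_cd text );
    insert into taba values (1, 0), (2, 0);
    insert into tabb values (1, 'code1'), (1, 'code2'), (2, 'code1');
    -- step 1: the "gtt" with a primary key on the join column
    create table gtt ( id int primary key, cnt int );
    insert into gtt select id, count(*) from tabb group by id;
""")

# step 2: update the join; the primary key on gtt(id) guarantees at most
# one matching row per taba row, so the update is unambiguous.
con.execute("""
    update taba
       set pop = (select cnt from gtt where gtt.id = taba.id)
     where id in (select id from gtt)
""")
result = con.execute("select id, pop from taba order by id").fetchall()
print(result)  # -> [(1, 2), (2, 1)]
```

The WHERE on the outer update matters: without it, rows with no match in gtt would have pop set to NULL.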
In this update I want to find the equivalent date/day in the previous year corresponding to the current year's date, and update the amount. I am using table T1 to insert or update the data in table T2 (which has a unique key on columns c1, c2) in Oracle 8.1.7.

    merge into t1
    using t2
    on ( t2.object_id = t1.object_id )
    when matched then
        update set t1.object_name = t2.object_name
    when not matched then
        insert (object_id, object_name) values ( t2.object_id, t2.object_name );

    29317 rows merged.

In my case table T1 (a staging table with 300 columns) will have 200 thousand rows every day to insert or update into the history table T2 (280 columns), which has 10 million historical rows. For this I am using another table to find previous-year dates for the current-year dates. 2.) All rows that fail during insert/update should be logged to a fault table. 3.) Find out the number of rows inserted versus updated.

OK, this is the real query:

    UPDATE STG_CLAIM_TRY A
       SET A. ELIGIBILITY_KEY FROM STG_F_ELIGIBILITY_TRY B WHERE A.

A failure in the middle of an append into a table -- harmless; the temporary extents we were writing to just get cleaned up.
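The MERGE above maps onto the generic insert-or-update ("upsert") idea. As a rough sketch of the same staging-to-history semantics, here is a Python/sqlite3 version using SQLite's INSERT ... ON CONFLICT DO UPDATE in place of Oracle's MERGE; the table names (t1_stage, t2_hist) are illustrative, not from the post:

```python
import sqlite3

# History table keyed on object_id; staging table holds one changed row
# and one brand-new row.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table t2_hist  ( object_id integer primary key, object_name text );
    create table t1_stage ( object_id integer, object_name text );
    insert into t2_hist  values (1, 'OLD_NAME');
    insert into t1_stage values (1, 'NEW_NAME'), (2, 'BRAND_NEW');
""")

# "where true" disambiguates SQLite's upsert grammar when the source
# is a SELECT rather than a VALUES list.
con.execute("""
    insert into t2_hist (object_id, object_name)
    select object_id, object_name from t1_stage where true
    on conflict (object_id) do update set object_name = excluded.object_name
""")
result = con.execute("select * from t2_hist order by object_id").fetchall()
print(result)  # -> [(1, 'NEW_NAME'), (2, 'BRAND_NEW')]
```

As with MERGE, the conflict target must carry a unique constraint; that is what makes the matched/not-matched decision deterministic, and it is why a single set-based statement like this usually beats a row-by-row loop.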
I am using PL/SQL to handle this; the inserts are fine but the updates are very slow. The test PL/SQL is as follows -

    declare
        todate          date;
        current_mode    char(10) := 'load';
        exec_code       varchar2(200);
        exec_message    varchar2(2000);
        this_total      number := 0;
        this_succ       number := 0;
        this_update     number := 0;
        this_fail       number := 0;
        this_insert     number := 0;
        this_start      char(10);
        this_end        char(10);
        this_table_name varchar2(10) := 'test';
        cursor c is select * from test_temp;
        commit_point    number := 0;
        test_case       number := null;
        fault_id        varchar2(30);
    begin
        select to_char(sysdate, 'hh24:mi:ss') into this_start from dual;
        delete from test_load_result
         where table_name = upper(trim(this_table_name));
        commit;
        for xc1 in c loop
            this_total   := this_total + 1;
            commit_point := commit_point + 1;
            if commit_point > 10000 then
                commit;
                commit_point := 0;
            end if;
            begin
                test_case := xc1.test_jdate;
                insert /* append */ into test
                     ( test_id, test_code, test_case, test_month,
                       test_jdate, sequence_no, check_status )
                values
                     ( trim(xc1.test_id),
                       trim(xc1.test_code),
                       to_date( to_char(to_date('','mm-dd-yyyy'),'j')
                                + abs(test_case), 'j' ),
                       upper(to_char( to_date( to_char(to_date('','mm-dd-yyyy'),'j')
                                               + abs(test_case), 'j' ), 'mon' )),
                       trim(xc1.test_jdate),
                       trim(xc1.sequence_no),
                       trim(xc1.check_status) );
                this_succ := this_succ + 1;
            exception
                when others then
                    if sqlcode = -1 then
                        begin
                            update test
                               set check_status = trim(xc1.check_status)
                             where test_id     = trim(xc1.test_id)
                               and test_code   = trim(xc1.test_code)
                               and test_jdate  = trim(xc1.test_jdate)
                               and sequence_no = trim(xc1.sequence_no);
                            this_update := this_update + 1;
                        exception
                            when others then
                                exec_code    := sqlcode;
                                exec_message := sqlerrm;
                                select sysdate into todate from dual;
                                if (this_update > 0) then
                                    this_update := this_update - 1;
                                end if;
                                this_fail := this_fail + 1;
                                fault_id  := trim(xc1.rms_index);
                                insert into test_fault
                                select sysdate, st.*
                                  from test_temp st
                                 where trim(rms_index) = trim(fault_id);
                                insert into test_exception
                                     ( exec_date, exec_mode, table_name,
                                       rms_index, exec_code, exec_message )
                                values
                                     ( todate, current_mode, this_table_name,
                                       fault_id, exec_code, exec_message );
                        end;
                    else
                        exec_code    := sqlcode;
                        exec_message := sqlerrm;
                        select sysdate into todate from dual;
                        this_fail := this_fail + 1;
                        fault_id  := trim(xc1.rms_index);
                        insert into test_fault
                        select sysdate, st.*
                          from test_temp st
                         where trim(rms_index) = trim(fault_id);
                        insert into test_exception
                             ( exec_date, exec_mode, table_name,
                               rms_index, exec_code, exec_message )
                        values
                             ( todate, current_mode, this_table_name,
                               fault_id, exec_code, exec_message );
                    end if;
            end;
        end loop;
        this_succ := this_succ + this_update;
        select decode( sign(this_succ - (this_update + this_fail)), 1,
                       this_succ - (this_update + this_fail), 0 )
          into this_insert from dual;
        select to_char(sysdate, 'hh24:mi:ss') into this_end from dual;
        test_results( todate, this_start, this_end, this_table_name,
                      this_total, this_succ, this_insert, this_update,
                      this_fail );
        commit;
    end;
    /

November 08, 2002 - pm UTC

you do realize that the /* append */ "hint" is just a waste of keystrokes, right? (without the "+" it is merely a comment, not a hint)

NOLOGGING on an index only affects things like:

o create (no existing data)
o rebuild (no existing data is touched)

see

Tom, thanks for the clarification of NOLOGGING.

    --For incremental/new data-----
    insert into A
    select * from B
     where column_name NOT IN (select column_name from A);

    --For edited data-------
    cursor C_AB is
        select * from B
        minus
        select * from A;
    for R in C_AB loop
        update A set .... where ...;
    end loop;

****************************************************** to finish. I'm with you on the fact that this update should take no more than a few minutes, but it doesn't :-) .

DATE_SERVICE, 'YYYYMMDD'))

May 24, 2004 - pm UTC

search this site for ora-01555. also ask yourself: so, what happens when we crash in the middle of the loop?

For this I have something called the actual value, which is stored in the variable vnum_actualvalue. end loop; end; It's working, but it takes a huge amount of time and sometimes hangs the computer.
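The insert-then-fallback-update pattern in the loop above can be sketched outside PL/SQL. This is a minimal Python/sqlite3 analogue (table and counter names are mine, not from the post), catching the duplicate-key error the same way the block catches sqlcode = -1:

```python
import sqlite3

# Target table with a primary key; one row already exists, so the first
# incoming row collides and must be handled as an update.
con = sqlite3.connect(":memory:")
con.execute("create table test ( test_id int primary key, check_status text )")
con.execute("insert into test values (1, 'old')")

rows = [(1, 'updated'), (2, 'inserted')]
n_ins = n_upd = 0
for test_id, status in rows:
    try:
        # try the insert first, exactly as the PL/SQL loop does
        con.execute("insert into test values (?, ?)", (test_id, status))
        n_ins += 1
    except sqlite3.IntegrityError:  # duplicate key -> fall back to update
        con.execute("update test set check_status = ? where test_id = ?",
                    (status, test_id))
        n_upd += 1
print(n_ins, n_upd)  # -> 1 1
```

This makes the cost visible: every collision pays for a failed insert plus an update, row by row, which is why a single set-based MERGE is normally far faster than this loop.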
Updating 10,000 rows in a 100,000 row table should take seconds (it'll be a direct function of the number of indexes). Is there any other information I can provide to help shed some light on this pain-in-the-neck update? I changed the global temporary table to an index-organized table; the insert takes minutes and the update never finishes (it's still running now, about 30 minutes already). DATE_SERVICE FROM STG_CLAIM_TRY A, STG_F_ELIGIBILITY_TRY B WHERE A. Can I restart that process, or did the programmer not even begin to think about that eventuality?

June 15, 2004 - pm UTC

1) b must have a primary key, yes. Where did I show merge being faster than a single update?

This value in the variable vnum_actualvalue should be distributed among the appliedvalue column as follows. I have a table named A containing, say, 100,000 records. Analyze, use the CBO, and look for nice big juicy HASH JOINS.

Hi Tom, thank you very much for your query. DATE_SERVICE

May 12, 2004 - pm UTC

My concept now, since the join is "fast", is to use a global temporary table with a primary key: insert the results of the select join into it, and update the join (which we can do since the gtt will have a proper primary key on it).

Tom, I tried that but I think I'm doing something wrong. There are no indexes or constraints on STG_CLAIM_TRY. I have another table B containing 10,000 records of incremented and edited records of table A. I am using the following code to append data from B to A.
consider:

    merge into big_table B
    using small_table S
    on ( = )
    when matched then update set
        b.owner = s.owner, b.object_name = s.object_name,
        b.subobject_name = s.subobject_name, b.object_id = s.object_id,
        b.data_object_id = s.data_object_id, b.object_type = s.object_type,
        b.created = s.created, b.last_ddl_time = s.last_ddl_time,
        b.timestamp = s.timestamp, b.status = s.status,
        b.temporary = s.temporary,
        b.generated = s.generated, b.secondary = s.secondary
    when not matched then insert
        ( ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
          OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
          GENERATED, SECONDARY )
    values
        ( s.

Surely next time I will follow your instructions regarding create and insert statements, which helps you to answer quickly. Regards, dmv

Tom, I am using the cost-based optimizer. I followed your suggestion and analyzed the two tables; this is what I get now. ELIGIBILITY_KEY FROM STG_F_ELIGIBILITY_TRY B, STG_CLAIM_TRY A WHERE A. The updating session is the only session in the db.

    create table testupdate
    ( sno          number(4),
      itemcode     varchar2(8),
      value        number(4),
      appliedvalue number(4) );

    insert into testupdate (sno,itemcode,value) values (1,'item1',200);
    insert into testupdate (sno,itemcode,value) values (2,'item2',100);
    insert into testupdate (sno,itemcode,value) values (3,'item3',300);
    insert into testupdate (sno,itemcode,value) values (4,'item4',200);
    insert into testupdate (sno,itemcode,value) values (5,'item5',50);
    insert into testupdate (sno,itemcode,value) values (6,'item6',200);
    insert into testupdate (sno,itemcode,value) values (7,'item7',400);

    SNO  ITEMCODE  VALUE  APPLIEDVALUE
    ---  --------  -----  ------------
      1  item1       200
      2  item2       100
      3  item3       300
      4  item4       200
      5  item5        50
      6  item6       200
      7  item7       400

Now I'm writing a stored procedure in which I have to update the appliedvalue column of the above table. It isn't doing anything beyond confusing the reader of your code....
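As for distributing the actual value across appliedvalue: the exact allocation rule is not spelled out above, so this is only a guess at the intent -- a sketch in which each row absorbs up to its own value until the total (the vnum_actualvalue variable) is exhausted:

```python
# Assumed allocation rule: walk the rows in sno order, letting each row
# take up to its VALUE from the remaining total. This is an illustration
# of one plausible reading of the question, not the poster's actual rule.
def distribute(values, total):
    """Allocate `total` across rows; each row takes at most its value."""
    applied = []
    for v in values:
        take = min(v, total)
        applied.append(take)
        total -= take
    return applied

# VALUE column from the testupdate rows above
values = [200, 100, 300, 200, 50, 200, 400]
print(distribute(values, 450))  # -> [200, 100, 150, 0, 0, 0, 0]
```

Whatever the real rule is, expressing it as a pure function like this first makes the stored procedure (or a single analytic UPDATE) much easier to get right.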
If there is no match/join, then STG_CLAIM_TRY should be null. I changed the index a little, so the explain plan is a little different than before; the index on STG_F_ELIGIBILITY_TRY has CLIENT_MEMBER_ID, DATE_EFFECTIVE, DATE_TERMINATION.