Blob Field With Compression Data Is Not Valid
I am trying to see from an SQL console what is inside an Oracle BLOB. I know it contains a somewhat large body of text and I want to just see the text, but the following query only indicates that there is a BLOB in that field:

    select BLOB_FIELD from TABLE_WITH_BLOB where ID = '...';

The result I'm getting is not quite what I expected:

    BLOB_FIELD
    ----------
    oracle.sql.BLOB@1c4ada9

So what kind of magic incantations can I do to turn the BLOB into its textual representation?

PS: I am just trying to look at the content of the BLOB from an SQL console (Eclipse Data Tools), not use it in code.

First of all, you may want to store text in CLOB/NCLOB columns instead of BLOB, which is designed for binary data (your query would work with a CLOB, by the way). The following query will let you see the first 32767 characters (at most) of the text inside the blob, provided all the character sets are compatible (the original character set of the text stored in the BLOB, and the character set of the database used for VARCHAR2):

    select utl_raw.cast_to_varchar2(dbms_lob.substr(BLOB_FIELD))
      from TABLE_WITH_BLOB where ID = '...';

In case your text is compressed inside the blob using the DEFLATE algorithm and it's quite large, you can use this function to read it:

    CREATE OR REPLACE PACKAGE read_gzipped_entity_package AS
      FUNCTION read_entity(entity_id IN VARCHAR2)
      RETURN VARCHAR2;
    END read_gzipped_entity_package;
    /

    CREATE OR REPLACE PACKAGE BODY read_gzipped_entity_package IS
      FUNCTION read_entity(entity_id IN VARCHAR2) RETURN VARCHAR2
      IS
        l_blob        BLOB;
        l_blob_length NUMBER;
        l_amount      BINARY_INTEGER := 10000; -- must be
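The package above is cut off. As a rough, hypothetical sketch of the same idea (not the original author's code; table, column, and function names are assumptions), a single function built on the standard utl_compress, dbms_lob, and utl_raw packages can do the job:

```sql
-- Hypothetical sketch: read a gzip/DEFLATE-compressed BLOB as text.
-- BLOB_FIELD, TABLE_WITH_BLOB, and read_compressed_text are assumed names.
CREATE OR REPLACE FUNCTION read_compressed_text( p_id IN VARCHAR2 )
RETURN VARCHAR2
IS
  l_compressed BLOB;
  l_plain      BLOB;
BEGIN
  SELECT BLOB_FIELD INTO l_compressed
    FROM TABLE_WITH_BLOB
   WHERE ID = p_id;

  -- utl_compress understands the gzip container produced by lz_compress
  l_plain := utl_compress.lz_uncompress( l_compressed );

  -- first 32767 bytes at most; same character-set caveat as above
  RETURN utl_raw.cast_to_varchar2( dbms_lob.substr( l_plain, 32767, 1 ) );
END read_compressed_text;
/
```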
Hello, I have a table with BLOBs; they are actually character logs generated by fax transmission software. They are about 4k-16k big and pretty redundant; they compress with zlib to just about 1k-2k, and I want to store them compressed in the DB. Now the question: what is better in your opinion? To use a server-side compressor library in on-insert triggers for storing (they will never be updated) and decompressing views? Or would you code the compressor/decompressor logic on the client side? I wrote the compressor/decompressor as a Java stored procedure and use the first approach, so there is no compressor-specific client-side code (Perl, Java and C), but the drawback is that there is more network traffic than necessary. Any comments? Thank you in advance. Regards, Piotr

and we said...

I would do the compression in the database - easier to manage, and the amount of network traffic probably won't be an issue (they trickle in all day long, right?). In 10g, the package utl_compress will be of keen interest to you, to replace all of the custom code you have. Make sure to enable inline row storage - else each lob will be stored on a block all by itself (e.g. a 1k lob stored out of line will consume at least ONE block all by itself; stored inline, in the row, it'll just take its size).

and you rated our response.
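For what it's worth, a minimal sketch of the utl_compress round trip mentioned above (table and column names here are invented for illustration, not from the original thread):

```sql
-- Hedged sketch: compress and restore a BLOB with the 10g utl_compress package.
-- fax_logs and log_body are assumed names.
DECLARE
  l_raw        BLOB;
  l_compressed BLOB;
  l_restored   BLOB;
BEGIN
  SELECT log_body INTO l_raw FROM fax_logs WHERE ROWNUM = 1;

  -- quality 1..9 trades CPU for compression ratio (default is 6)
  l_compressed := utl_compress.lz_compress( src => l_raw, quality => 6 );
  l_restored   := utl_compress.lz_uncompress( src => l_compressed );

  dbms_output.put_line( 'original:   ' || dbms_lob.getlength( l_raw ) );
  dbms_output.put_line( 'compressed: ' || dbms_lob.getlength( l_compressed ) );
END;
/
```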
Tom, I have a new issue. We have an application (C#) that stores JPG images on 9iR2; these images are document pages, where a document can have 1..N pages. Thanks to Piotr Jarmuz for the sample code above. Using the functions to decompress LOBs on-the-fly in views, do we need to free the temporary LOBs created by the decompression functions? For example: we have an underlying table containing a compressed BLOB. Data is inserted via a procedure call from a Java-based client, similar to:

    tempBlob = BLOB.createTemporary( conn, true, BLOB.DURATION_SESSION );
    tempBlob.open( BLOB.MODE_READWRITE );
    OutputStream os = tempBlob.getBinaryOutputStream();
    ...
    tempBlob.close();
    cs = (CallableStatement) conn.prepareCall( "begin LobUtility_pkg.insertFile(?,?
Just to be sure I understand which LOB needs freeing:

1) The insertion calls a packaged procedure, passing in a temporary LOB. This LOB is created and closed by the client, then bound to a callable statement. Calls to freeTemporary on this side cause ORA-22922: nonexistent LOB value (maybe because the LOB was already closed?).

2) The view encapsulating the decompression function returns a temporary blob via SQL. Clients could:

    begin
      for rec in (select file_id, file_content from v)
      loop
        if dbms_lob.istemporary( rec.file_content ) = 1 then
          dbms_lob.freetemporary( rec.file_content );
        end if;
      end loop;
    end;

You're suggesting that we need to close the temporary blobs returned to us via the view, as in example #2 above, correct?
The following code will help you create a table called 'emp' in an Oracle database with three fields, namely id, name, and photo. The photo field is a BLOB column into which you are going to insert an image as the value. The image may be of almost any size, even gigabytes, as a BLOB field in Oracle can store up to 4 GB. When working with blob data in SQL Server, the amount of data per record can be large: any binary data, such as images, office documents, compressed data, etc. To store blob data in these fields, you specify the MAX field size. Part of the reason these new data types have become so popular so quickly is because they do not
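A minimal sketch of the emp table just described, with inline-row storage enabled and a server-side image load from a directory object (the directory name, file name, and sample row are assumptions for illustration):

```sql
-- Sketch: emp table with a BLOB photo column, stored inline when small.
CREATE TABLE emp (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  photo BLOB
) LOB (photo) STORE AS ( ENABLE STORAGE IN ROW );

-- Load an image file into the BLOB from an Oracle directory object.
DECLARE
  l_bfile BFILE := BFILENAME('IMG_DIR', 'photo.jpg');  -- assumed dir/file
  l_blob  BLOB;
BEGIN
  INSERT INTO emp (id, name, photo)
  VALUES (1, 'Scott', empty_blob())
  RETURNING photo INTO l_blob;

  dbms_lob.fileopen( l_bfile, dbms_lob.file_readonly );
  dbms_lob.loadfromfile( l_blob, l_bfile, dbms_lob.getlength(l_bfile) );
  dbms_lob.fileclose( l_bfile );
  COMMIT;
END;
/
```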
Anything to worry about on the insertion side (example #1)?

I am trying to create a trigger that will automatically compress blobs while they are inserted into the database (I use 10g). In the documentation there is an example, 'Example: Modifying LOB Columns with a Trigger'. It clearly states that 'Formerly, you could examine LOB columns within a trigger body, but not modify them. Now, you can treat them the same as other columns'. I have had ABSOLUTELY no luck modifying a blob value while it is inserted in the db. No errors are generated, the value just won't change!

Followup August 05, 2005 - 11:24 am UTC

Not going to happen. Think about how blobs are typically placed into the database:

    insert into table ( ..., blob_col ) values ( ..., empty_blob() );

Now, that is when the trigger fired; the client gets the empty lob back and then 'streams' the data into it. That is, the LOB data isn't there when you insert - it comes LATER. SQLLDR for sure does it like that. You'll need to come up with another approach; the trigger isn't going 'to happen'.
You could use the trigger to schedule a JOB to run after the row is committed, or you could run a procedure AFTER loading the data (you do have an 'is_compressed' flag in this table, right?).

Followup October 10, 2006 - 8:07 am UTC

T1 and T2 are the names of table segments. Lobs over 4000 bytes are stored out of line in their OWN segments. You measured the table, and the table was not 'compressed':

    ops$tkyte%ORA10GR2> select segment_name from user_segments;
    no rows selected

    ops$tkyte%ORA10GR2> create table t ( x clob );
    Table created.

    ops$tkyte%ORA10GR2> select segment_name from user_segments;

    SEGMENT_NAME
    ------------
    T
    SYS_IL...C00001$$
    SYS_LOB...C00001$$
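One hedged way to implement the "schedule a JOB from the trigger" idea (all object names here are assumptions; the key point is that dbms_job queues the call until after the inserting transaction commits, by which time the client has finished streaming the LOB):

```sql
-- Sketch only: compress the row in a job that runs after COMMIT.
-- fax_logs, compress_pkg.compress_row are assumed names.
CREATE OR REPLACE TRIGGER fax_logs_compress_later
AFTER INSERT ON fax_logs
FOR EACH ROW
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  dbms_job.submit(
    job  => l_job,
    what => 'compress_pkg.compress_row(''' || :new.id || ''');' );
END;
/
```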
Sorry, I'm still not clear. I see examples in this and other threads using dbms_lob.freetemporary to free temporary lobs that are explicitly created and assigned to a variable (as in a PL/SQL block).
But it appears that implicit temporary lobs are created (with session duration) when you use a SQL function returning a LOB in a select statement. For example, each time I execute the following statement in SQL Navigator or from a script, CACHE_LOBS increases by 1 until the session ends:

    select substr(to_clob(lpad('X',2000,'X')), 1, 100) from dual;

The same thing happens using the decompress-on-the-fly view. The goal here is to have the client script use a simple select statement on the view, retrieving the result as an ADODB Recordset. It doesn't appear as if I can call dbms_lob.freetemporary on the Recordset fields.
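One way to watch those implicit temporary LOBs accumulate is the v$temporary_lobs dictionary view, whose CACHE_LOBS column is the counter referred to above:

```sql
-- Count of temporary LOBs held by the current session.
SELECT cache_lobs, nocache_lobs
  FROM v$temporary_lobs
 WHERE sid = sys_context('userenv', 'sid');
```

Re-running the to_clob query above and then this one shows cache_lobs ticking up until the session ends or the LOBs are freed.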
Is there some way to use it in a view definition?

Hi Tom, this post has been very useful. In our scenario, we have a table which is used for helpdesk, bug reporting, change requests, etc.
In it we have a BLOB column which stores all kinds of attachments, from zips to office documents to snapshots. Day by day the activity is increasing, and this lobsegment has grown very big: from 2 GB to 40 GB (about a couple of months ago), and it is now 73 GB. This is becoming unmanageable and we are running short of disk space. We are on 9iR2 currently. Given the scenario, what do you think is the best approach towards achieving maximum compression? Is Piotr's the only solution for us? Also, will each compress/decompress consume that much CPU? Thanks in advance.
Hi Tom, I'm learning a lot from this thread, and tried to implement it myself but received an error in the call to UTL_COMPRESS.LZ_COMPRESS stating 'PLS-00306: wrong number or types of arguments in call to LZ_COMPRESS'.

    CREATE TABLE AUDIO_TBL( "AUDIO_ID" RAW(32), "AUDIO" "ORDSYS"."

Followup September 04, 2007 - 2:16 pm UTC

Well, the real problem will be that you are not storing audio anymore, you are storing a compressed file - there would be NO purpose in using this datatype anymore, since you are just storing 'stuff', not an audio file. So you would just use a blob; upon retrieval and decompression, you would be able to use it in a local variable of the audio type. Also, bear in mind, most audio is already compressed; compressing compressed stuff doesn't typically result in anything smaller - in fact, sometimes larger.
Hi 'A Reader',

In your example you changed the name of the ORDSYS.ORDAudio variable from 'obj' in the documentation example to 'v_src_audio', but you didn't change this line:

    obj.getContentInLob(ctx, v_compressed_audio);

needs to be

    v_src_audio.getContentInLob(ctx, v_compressed_audio);

That's why you are getting the error:

    8/9 PLS-00302: component 'GETCONTENTINLOB' must be declared

And then your next problem will be not declaring the two OUT variables required to hold the mime type and format.

HTH
Chris.
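Putting both of Chris's corrections together, the call might look like this hypothetical fragment (variable names follow the post above; the exact ctx declaration and the table/column names are assumptions):

```sql
-- Sketch: call getContentInLob on the renamed variable, supplying the
-- two OUT parameters for mime type and format that Chris mentions.
DECLARE
  v_src_audio        ORDSYS.ORDAudio;
  v_compressed_audio BLOB;
  ctx                RAW(4000) := NULL;
  v_mime_type        VARCHAR2(80);   -- first OUT variable
  v_format           VARCHAR2(80);   -- second OUT variable
BEGIN
  SELECT audio INTO v_src_audio FROM audio_tbl WHERE ROWNUM = 1;
  dbms_lob.createtemporary( v_compressed_audio, TRUE );
  v_src_audio.getContentInLob( ctx, v_compressed_audio,
                               v_mime_type, v_format );
END;
/
```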