Tuesday, February 25, 2014

Hat Trick Tuesday - 3 new releases

  There are 3 new releases today from the database tools teams.  For non-hockey folks, a hat trick is when you score 3 goals in 1 game.  In our world of software, 3 releases at once is the closest thing (I think).

First up is SQL Developer; with over 3.5m users, it is by far the most widely used IDE for Oracle databases.

SQL Developer 4.0.1

  The Oracle IDE for the database has come a long way from when we started.  Everyone should upgrade to this latest patch, as there are lots of fixes in it; the full list is here.  Looking over the documentation from a previous release really shows how much the tool has grown.

SQL Developer 1.1 Worksheet

SQL Developer Datamodeler 4.0.1

   The same applies to the modeler: there are a bunch of fixes in here.  However, there are also a bunch of new features included in this patch, such as mouse-wheel zooming in/out and a dialog that makes it easier to add/remove objects from subviews.  The other great thing about the modeler is it's already inside SQL Developer.

Oracle REST Data Services 2.0.6 ( formerly Oracle APEX Listener )

    The biggest change is that the Oracle APEX Listener is now named Oracle REST Data Services.

   What does this name change mean to you?

  • Forum Name changed
  • OTN URLs, Links changed
  • The file name changed: apex.war is now ords.war
  • Default URL is now /ords

   The biggest reason for this change is to put the emphasis on where the majority of development work is going.  There are a lot of new features coming for REST in the next release to make it easier to expose your relational, non-relational, and NoSQL data.

   The new name has no impact on the support of Application Express.  The OWA Toolkit, which Application Express relies on, will continue to be enhanced.

   We will be publishing a Statement of Direction soon to let everyone know where this is headed.  In the meantime, there are a lot of examples on this blog and on Colm's blog.

Wednesday, November 06, 2013

APEX Listener supported App Servers

     With the latest news on GlassFish, I thought it might be a good time to review the deployment options for the APEX Listener.  The huge caveat is that this is as of today, 11/6/2013; the future can change anything, but there's nothing planned.

The Licenses

I'm just putting the important parts here for reference.  They are linked to the full licenses.

The APEX Listener is only available via OTN.  When it is downloaded, this is the license everyone has to agree to:

“We grant you a nonexclusive, nontransferable limited license to use the programs solely for your internal business operations subject to the terms of this agreement and the license grants and restrictions set forth with your licensing of such Oracle database programs. We also grant you a nonexclusive, nontransferable limited license to use the programs for purposes of developing your applications. This agreement does not grant any rights or licenses to Oracle database programs.”

  • A restricted-use license for Oracle Application Server Containers for J2EE (OC4J) is included with all editions (except for Oracle Database Express Edition). This embedded version is provided solely to support Oracle Enterprise Manager (Database and Grid Control), Advanced Queuing Servlet, Ultra Search, Application Express, and Warehouse Builder, and may not be used or deployed for other purposes.

  • A restricted-use license for Oracle Application Express Listener is included with all editions (except for Oracle Database Express Edition) solely to support connectivity to Oracle Application Express, which may be installed in the Oracle Database. Running Oracle Application Express Listener on a different server does not require that that server be licensed to use the Oracle Database.

Supported Application Servers

The APEX Listener's documentation lists the supported web servers for deployment.

Supported Java EE Application Servers
Oracle Application Express Listener supports the following Java EE application servers:

  Application Server        Supported Release
  Oracle WebLogic Server    11g Release 1 (10.3.3) or later
  Oracle GlassFish Server   Release 3 or later

Clarifications on the Application Servers

I think some of the confusion is the wording of “Oracle GlassFish Server.”  This was not intended to limit support to the commercial version of GlassFish.  We support the APEX Listener no matter which GlassFish you are using.  The support for the APEX Listener is tied directly to the database support agreement.  To state the obvious, the APEX Listener team is not supporting any application server, only our code base running inside that application server.  If an issue in the web server itself arises, the customer will have to track that issue down with the web server vendor.

This table of supported Application Servers will soon be updated to the following:

  Application Server        Supported Release
  Oracle WebLogic Server    11g Release 1 (10.3.3) or later
  GlassFish Server          Release 3 or later
  Apache Tomcat             Release 6 or later

Tuesday, October 01, 2013

How to use RESTful to avoid DB Links with ā'pěks

So the question came up of avoiding a db link by using the APEX Listener's RESTful services to get at the same data.  This is all in the context of an APEX application, so apex_collections is the obvious place to stuff transient data that can be used over a session.

Step 1:  Make the RESTful Service.
The only catch is to turn pagination off (set it to zero).  I didn't need it for now, so the code that follows doesn't account for it.
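The effect of that pagination setting can be sketched in plain Python (the row count and 25-row page size here are invented for illustration): with a non-zero page size the Listener only returns the first page of items, so the glue code below would silently miss rows.

```python
# Illustrative sketch of the pagination gotcha: the APEX Listener
# wraps the query rows in an "items" array, and a non-zero
# pagination size returns only the first page of that array.
rows = [{"id": i} for i in range(100)]          # pretend query result

def rest_response(pagination_size):
    """Simulate the JSON payload for a given pagination setting."""
    if pagination_size == 0:                    # pagination disabled
        return {"items": rows}
    return {"items": rows[:pagination_size]}    # first page only

print(len(rest_response(25)["items"]))          # 25  -> rows lost
print(len(rest_response(0)["items"]))           # 100 -> all rows
```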

Step 2. Install PL/JSON 

  In the future this can be done with the built-in JSON support in the database.

Step 3.  Glue code.

declare
  g_collection_name varchar2(32767) := 'json_data';
-- JSON Portion
  l_url        varchar2(32767) := '';
  l_jsonObj    json;
  l_jsonObj2   json;
  l_jsonKeys   json_list;
  l_key        varchar2(32767);
  l_value      varchar2(32767);
  l_tempdata   json_value;
  l_temparray  json_list;
  values_array dbms_sql.varchar2_table;

  procedure varray2collection(p_values dbms_sql.varchar2_table) as
    l_sql varchar2(32767) := 'begin APEX_COLLECTION.ADD_MEMBER (p_collection_name =>:NAME,';
    c   number;
    ret number;
  begin
  -- build up the begin..end sql block
    for i in 1..p_values.count loop
      -- parameter names are p_c001..p_c050
      l_sql := l_sql || 'p_c'|| lpad(i,3,'0') ||'=>:c'||i||',';
    end loop;
    -- chop off the last comma
    l_sql := substr(l_sql,1,length(l_sql)-1);
    -- add the end
    l_sql := l_sql || '); end;';
    c := dbms_sql.open_cursor;
    dbms_sql.parse(c, l_sql, dbms_sql.native);
    -- bind in the collection name
    dbms_sql.bind_variable(c, ':NAME', g_collection_name);
    -- bind in the values
    for i in 1..p_values.count loop
      dbms_sql.bind_variable(c, ':c'||i, p_values(i));
    end loop;
    ret := dbms_sql.execute(c);
    dbms_sql.close_cursor(c);
  exception when others then
    -- clean up the cursor and re-raise
    if dbms_sql.is_open(c) then
      dbms_sql.close_cursor(c);
    end if;
    raise;
  end varray2collection;
  -- simple function to remove the leading and trailing "
  function trimQuotes(s varchar2) return varchar2 is
    l_value varchar2(32767);
  begin
    l_value := substr(s,2);
    l_value := substr(l_value,1,length(l_value)-1);
    return l_value;
  end trimQuotes;
  -- function to grab the URL and return a JSON object
  function getURL(p_url varchar2) return JSON is
    -- HTTP Portion
    l_http_request   UTL_HTTP.req;
    l_http_response  UTL_HTTP.resp;
    l_clob           CLOB;
    l_text           VARCHAR2(32767);
  begin
    -- Initialize the CLOB.
    DBMS_LOB.createtemporary(l_clob, FALSE);
    -- Make the HTTP request and get the response.
    l_http_request  := UTL_HTTP.begin_request(p_url);
    l_http_response := UTL_HTTP.get_response(l_http_request);
    -- Copy the response into the CLOB.
    BEGIN
      LOOP
        UTL_HTTP.read_text(l_http_response, l_text, 32766);
        DBMS_LOB.writeappend(l_clob, LENGTH(l_text), l_text);
      END LOOP;
    EXCEPTION
      WHEN UTL_HTTP.end_of_body THEN
        -- Release the resources associated with the response.
        UTL_HTTP.end_response(l_http_response);
    END;
    dbms_output.put_line('Length:'|| DBMS_LOB.getlength(l_clob));
    return json(l_clob);
  end getURL;
-- apex collection
begin
  -- make sure the collection exists and is empty
  apex_collection.create_or_truncate_collection(g_collection_name);
  l_jsonObj := getURL(l_url);
  -- grab the items
  l_temparray := json_list(l_jsonObj.get('items'));
  dbms_output.put_line('Count:'|| l_temparray.count);
  for i in 1..l_temparray.count loop
    -- get the Nth item
    l_tempdata := l_temparray.get(i);
    dbms_output.put_line('Record['|| i ||']:'|| l_tempdata.to_char);
    -- bring the Nth item into its own json object
    l_jsonObj2 := json(l_tempdata);
    -- get all the keys to walk
    l_jsonKeys := l_jsonObj2.get_keys();
    for ii in 1..l_jsonKeys.count loop
      l_key   := trimQuotes(l_jsonKeys.get(ii).to_char);
      l_value := trimQuotes(l_jsonObj2.get(l_key).to_char);
      dbms_output.put_line(l_key || '=' || l_value);
      -- build up the array to pass to the apex collection
      values_array(ii) := l_value;
    end loop;
    -- send it to the collection
    varray2collection(values_array);
  end loop;
end;
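The walk the glue code performs is easier to see outside PL/SQL; this Python sketch (with a made-up payload) does the same thing: one list of values per record in items, which is what varray2collection hands to APEX_COLLECTION.ADD_MEMBER.

```python
import json

# Made-up payload in the shape the RESTful service returns.
payload = json.loads('{"items":[{"empno":7839,"ename":"KING"},'
                     '{"empno":7844,"ename":"TURNER"}]}')

collection = []                       # stands in for the apex collection
for record in payload["items"]:
    # walk the keys of the record, keeping the values in order
    values = [str(record[key]) for key in record]
    collection.append(values)         # one ADD_MEMBER call per record

print(collection)                     # [['7839', 'KING'], ['7844', 'TURNER']]
```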

Step 4. Make a page in Application Express

  The glue code is in the process named Get JSON.
  The interactive report is simply: select * from apex_collections.

Step 5. It's alive

Oh, the ā'pěks is from the oraclenerd store.  If you don't have a shirt already, you should.

Friday, September 27, 2013

RESTful Cursor support for JSON

Just a real quick blog before I forget.  In the latest APEX Listener 2.0.4 patch, there's support for nested cursors.  There are two gotchas.  First, make sure to disable pagination ( make it 0 ) on the REST definition.  Second, this only works at the top level, so no cursors nested inside nested cursors.

This is a very quick example of tables and, nested in each table, the columns and indexes that table has.

select t.table_name,
  cursor(select column_name
             from user_tab_cols tc 
             where tc.table_name = t.table_name) cols ,

  cursor(select index_name
             from user_indexes i 
             where i.table_name = t.table_name) indexes
 from user_tables t

The resulting JSON looks like this.
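The original sample output isn't reproduced here, but the shape is simple to sketch (the table, column, and index names below are invented): each cursor column is rendered as a nested array of objects inside its parent row.

```python
import json

# Invented illustration of the nested-cursor JSON shape: one object
# per table, with the "cols" and "indexes" cursors rendered as
# nested arrays inside it.
result = {
    "items": [
        {
            "table_name": "EMP",
            "cols": [{"column_name": "EMPNO"}, {"column_name": "ENAME"}],
            "indexes": [{"index_name": "PK_EMP"}],
        }
    ]
}
print(json.dumps(result, indent=2))
```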

Tuesday, August 20, 2013

Chunked File loading with APEX Listener + HTML5

  I just found the HTML5 File API the other day, so I had to see what I could do with the APEX Listener's RESTful services.  There are a bunch of blogs on what can be done, such as on HTML5Rocks.

  The end result is that the new File API lets JavaScript get details of the file and slice it up into parts.  Then I made a pretty simple REST endpoint to receive the chunks and put them back together again.

The actual sending part of the JavaScript is here:
function sendChunk(chunkNumber){
        var start = chunkSize * (chunkNumber-1);
        var end = start + chunkSize;
      // create the slice of the file ( slice's end index is exclusive )
        var fileContent = selectedFile.slice(start, end);
      // grab the length
        var length = fileContent.size;
      // send the slice of the file
        $.ajax({
          url: uri,
          type: "POST",
          data: fileContent,
          processData: false,
          beforeSend: function(xhr) {
            // pass in the chunk size,offset,name
            // as headers
            xhr.setRequestHeader('x-chunknumber', chunkNumber);
            xhr.setRequestHeader('x-offset', start );
            xhr.setRequestHeader('x-chunksize', length );
            xhr.setRequestHeader('x-content-type', selectedFile.type );
          },
          success: function (data, status) {
            bytesUploaded += length;
            // set the percent complete
            var percentComplete = ((bytesUploaded / selectedFile.size) * 100).toFixed(2);
            $("#fileUploadProgress").text(percentComplete + " %");
            // make a link to the REST that can deliver the file
            $("#downloadLink").html("New File");
            // if there's more chunks send them over
            if ( chunkNumber < chunks ) {
              sendChunk(chunkNumber + 1);
            }
          },
          error: function(xhr, desc, err) {
            console.log(desc + ':' + err);
          }
        });
}
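The same chunking is easy to exercise outside a browser; this Python sketch (no actual HTTP call, so it stays self-contained) slices an in-memory byte string, computes the same x-* headers the JavaScript sends, and checks the chunks reassemble into the original content.

```python
# Sketch of the browser-side chunking in Python: slice the content,
# compute the same x-* headers the JavaScript sends, and verify the
# chunks reassemble into the original bytes.
content = b"0123456789" * 100          # stand-in for the selected file
chunk_size = 256

chunks = []
chunk_number = 1
start = 0
while start < len(content):
    piece = content[start:start + chunk_size]
    headers = {
        "x-chunknumber": str(chunk_number),
        "x-offset": str(start),
        "x-chunksize": str(len(piece)),
        "x-content-type": "application/octet-stream",
    }
    chunks.append((headers, piece))    # a real client would POST each piece
    start += chunk_size
    chunk_number += 1

reassembled = b"".join(piece for _, piece in chunks)
print(len(chunks))                     # 4 chunks of <=256 bytes
```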

The next step is to map the HTTP headers to bind variables so the PL/SQL block can use them.

declare
  p_b         blob;
  p_body      blob;
  p_offset    number;
  p_filename  varchar2(4000);
  p_chunksize varchar2(200);
  p_status    varchar2(200);
begin
  -- pull the binds into locals
  p_offset    := :OFFSET + 1;
  p_body      := :body;
  p_filename  := :filename;
  p_chunksize := :chunksize;

  -- If there is already a file with this name, nuke it since this is chunk number one.
  if ( :chunkNumber = 1 ) then
    p_status := 'DELETING';
    delete from chunked_upload
     where filename = p_filename;
  end if;

  begin
    -- grab the blob storing the first chunks
    select blob_data
      into p_b
      from chunked_upload
     where filename = p_filename
       for update of blob_data;

    p_status := 'WRITING';

    -- append it
    dbms_lob.append(p_b, p_body);
  exception
    -- if no blob was found above, do the first insert
    when no_data_found then
      p_status := 'INSERTING';
      insert into chunked_upload(filename,blob_data,offset,content_type)
        values ( p_filename, p_body, p_offset, :contenttype );
  end;
exception
  when others then
    -- when something blows out print the error message to the client
    htp.p(p_status || ':' || sqlerrm);
end;
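The handler's branching is easy to model outside the database (a Python dict stands in for the CHUNKED_UPLOAD table): chunk number one replaces any existing row, and later chunks take the append path.

```python
# Model of the PL/SQL handler: a dict keyed by filename stands in
# for the CHUNKED_UPLOAD table.
table = {}

def handle_chunk(filename, chunk_number, body):
    # chunk number one nukes any previous upload with this name
    if chunk_number == 1:
        table.pop(filename, None)
    if filename in table:
        table[filename] += body      # the dbms_lob.append path
    else:
        table[filename] = body       # the no_data_found -> insert path

for n, piece in enumerate([b"AAA", b"BBB", b"CCC"], start=1):
    handle_chunk("demo.txt", n, piece)

print(table["demo.txt"])             # b'AAABBBCCC'
```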

A very simple html page for testing it all out.

Here's a quick video of how it all works.

Here's the complete Javascript/html for this sample.

JS Bin