Monday, November 8, 2010

Refresh Report Region and Pagination

When using the refresh action to refresh reports in Oracle APEX 4.0, you will inevitably encounter the problem of lost pagination. The refresh action really does refresh, resetting the report to the first page regardless of which page the user was currently viewing.

It Just Won't Tell Me
APEX knows the current pagination of a report. If you go to another page and back, or just press F5, you will see that APEX remembers which page of your report you were on. Where can I get that user data? I have looked at the documented APIs, and the closest thing I can see is apex_application.g_flow_current_min_row. The problem is that this is only available at render time and maps to the pg_min_row request value. Otherwise, I only get the value 1 (first row).

It knows, but won't tell me!

The Dirty Approach
Where there is a will, there is a way. I looked into the APEX tables, especially wwv_flow_data. If you look, you will see a record containing the user session instance and the report region id. The item value is a colon-separated string starting with the current min row, followed by the current max rows (probably). That looks useful, but keep your head about you here: the value column is a CLOB of a pretty special format, so if you attempt to use this data, be prepared to do a rewrite next time you patch your APEX installation.
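
To see what is there, you can peek at the row for your session (just a sketch, assuming the APEX 4.0 schema is named apex_040000, as in the function below, and that you have been granted select on the table):
select item_id, item_value
  from apex_040000.wwv_flow_data
 where flow_instance = :session_id;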

If you look at your pagination scheme, you will notice that the pagination link executes a JavaScript function called $a_report_Split (basically a wrapper for the old (but rewritten) $a_report). So if I could use wwv_flow_data to get the second parameter of $a_report_Split right (<pMin>_<pMax>_<pFetched>), then I would be good to go..? For a while, anyway. $a_report_Split is not a documented API JavaScript function, and may also be subject to change in upcoming releases.
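
To illustrate the format with made-up values: for a region currently showing rows 16 through 30 (15 rows per page), the pagination link ends up calling something like:
$a_report_Split('6855407541583618932','16_15_15');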

The Dirty Solution
Accessing wwv_flow_data directly is not something you want to allow in a normal APEX installation. Creating a very single-minded function in a privileged schema (granted select on wwv_flow_data) and granting execute on that function to my application schema was the best I could come up with.
create or replace function <privileged schema user>.get_current_pagination (
   p_region_id in number
) return varchar2 is

   l_ret        varchar2(255);
   l_session_id number := v('SESSION'); -- setting directly prevents unauthorized use
   l_value_tab  apex_application_global.vc_arr2;

   -- the row in wwv_flow_data keyed on session instance and region id
   cursor c_region (
      b_session_id in number
   ,  b_region_id  in number
   ) is
      select item_value
        from apex_040000.wwv_flow_data
       where flow_instance = b_session_id
         and item_id = b_region_id;
begin
   for r in c_region(l_session_id, p_region_id)
   loop
      l_value_tab := apex_util.string_to_table(r.item_value, ':');
   end loop;

   if l_value_tab.exists(1)
   then
      -- build the <pMin>_<pMax>_<pFetched> string expected by $a_report_Split;
      -- max rows is reused as the fetched count
      l_ret := l_value_tab(1) || '_' || l_value_tab(2) || '_' || l_value_tab(2);
   end if;

   return l_ret;
end;
/

grant execute on <privileged schema user>.get_current_pagination to <application schema user>;
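
A quick sanity check from the application schema could look like this (note that v('SESSION') is only set within an APEX session, so run it from e.g. a report region rather than SQL*Plus; the region id here is made up):
select <privileged schema user>.get_current_pagination(6855407541583618932) from dual;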
With that taken care of, the next thing is to get that value and use it in a call to $a_report_Split.

Follow these steps in your APEX builder:
  • Create two hidden page items called P1_REPORT_REGION_ID and P1_REPORT_SPLIT_PARAMS
  • Create a dynamic action to be called when you would like your report to be refreshed
  • Create three true actions:
  • First action to set P1_REPORT_REGION_ID with the id of the report region:
    Action: Set Value, Type: JavaScript Expression, JavaScript Expression: $('#<report region static id>').closest('div').attr('id').replace(/[^0-9]/g,'')
    (If you set this item value as a computation before header in APEX, you can skip this action)

  • Second action is to set P1_REPORT_SPLIT_PARAMS with values corresponding to the current report page:
    Action: Set Value, Type: PL/SQL Function Body, PL/SQL Function Body: return <privileged schema user>.get_current_pagination(:p1_report_region_id);

  • Third action is to call the $a_report_Split JavaScript function:
    Action: Execute JavaScript Code, Code: $a_report_Split($v('P1_REPORT_REGION_ID'),$v('P1_REPORT_SPLIT_PARAMS'));


That should be it.

Naturally, I cannot create a demo page on apex.oracle.com for this, as access to wwv_flow_data is somewhat scarce in that environment (and rightfully so).

Why Didn't He Just...
If you know of a better way to do this (and better than refreshing the whole page :-)), then please let me know! And if it is possible through documented APIs, that would be grand.

If this is the closest I get, and anyone on the APEX team is reading this, then please consider it an enhancement request :-P

For future reference: Oracle Application Express 4.0.1.00.03

Friday, November 5, 2010

Paranoia 101 - or protecting your investment

In corporate application development, a versioning and backup strategy is pretty much mandatory. But what about all those small personal projects? The only thing you can be really certain of when using a laptop/desktop for your pet projects is that eventually your machine is going to die. How much work gets lost when it does?

I use SVN for version control as a rule. I also use the APEX export and splitter utility to capture my local Oracle APEX applications into subversion repositories. Sometimes I even remember to put some of my more important repositories on Amazon S3 for safe storage. The APEX export is (very) loosely based on a blog post by John Scott back in 2006.

A word of caution: This is just my naive approach, use it at your own risk! Be sure to test that a restore actually works; that is just as important as the actual backing up :-)

The Start of an "Automated Approach"
While the manual steps described above will get you there, it is also a bit more work than I appreciate. In this post I will describe a more automated approach to ease your local APEX application backups (and anything else you put in your svn repository as well). A bit of pain to set it up, but once there, it is all plain sailing.

I do my development mainly on Windows, so all OS specifics described below are set in that context. This approach is viable for your favorite *nix OS as well; a tweak of the batch scripts should do it.

Subversion
You need a command line tool to manipulate subversion repositories; download Subversion from http://subversion.apache.org/packages.html. I currently use the one from CollabNet, but it should not matter much which one you choose. There are a number of online resources on how to install svn and create repositories (yes, even for Windows...).

Add the subversion bin directory to your PATH environment variable. After installation you should be able to execute svn commands from a cmd window:
svn help
svnadmin help
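
If you are starting from scratch, creating a local repository and importing your project is quick work (paths here are just examples):
svnadmin create c:\SVN\myapprepos
svn import c:\dev\myapp file:///c:/SVN/myapprepos/trunk/apex -m "Initial import"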

APEX Export Utility
This comes with the APEX installation files, and has the ability to extract single APEX applications (or whole instances) from a command line. An APEX splitter tool is also included, which together with a subversion repository enables you to track all the tiny changes in your applications.

Oracle APEX 4.0 accepts more modern Oracle jdbc libraries; in APEX 3.2 you had to locate classes12.jar (which predates even the jdbc libraries that ship with Oracle XE). You can download all jdbc libraries here: http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html

Since technetwork is constantly shifting about: If the link does not work, try to locate the Software Download Index and search for JDBC.
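
A manual export looks something like this (run from the directory containing the oracle\apex class files; the connect string and application id 100 are just examples):
set CLASSPATH=.;classes12.jar
java oracle.apex.APEXExport -db localhost:1521:orcl3 -user scott -password tiger -applicationid 100
java oracle.apex.APEXExportSplitter f100.sql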

Amazon S3
You need to set up an Amazon account before you can use Amazon Simple Storage Service (S3). If you have ever ordered books from Amazon, you probably already have an account, but it may be necessary to confirm payment methods.

Storage prices are not that high (at the moment). My reasoning is that I will suffer more if I don't back up my stuff to safe storage than a tiny monthly bill could ever make me.

In order for this to work, you have to obtain an access key:
• Log on to http://aws.amazon.com/account/
• Go to Security Credentials
• Go to Access Credentials
• If you do not have any previously generated access keys, then click Create a new Access Key
• Copy both the Access Key ID and the Secret Access Key

Be sure to keep this information to yourself.

Create a bucket where you would like your backups to be stored, and take note of the name.

The thing with Amazon S3 is its interfaces (REST style HTTP or SOAP) and storage objects organized in buckets. This basically means that you will require a third party tool (or do some serious programming yourself) to easily use the service.

JetS3t - Synchronize
I settled for JetS3t. I will not vouch for this tool in any way. I do not know who develops it, and I do not know if it sends my credentials to other people or companies. I basically just hope and pray.

When credentials can potentially be exploited in a very direct manner to get at your most precious data, or even to conduct illegal activities on your behalf, it is time to draw a deep breath. Maybe uploading your files manually wasn't such a bad idea anyway? As I said, I just hope and pray. I hope "someone" will scan the source code, and "someone" will scream if anomalies appear. I hope.

I use a part of JetS3t called Synchronize. It has a very easy CLI that suits my need for an automated backup process. Installation is just unzipping, but you have to tweak the properties a bit for it to work.

These are the changes I made in the jets3t\configs\jets3t.properties file:
s3service.default-bucket-location=EU (well, because it's the easiest for me)

And these are the changes in jets3t\configs\synchronize.properties:
accesskey=<your accesskey>
secretkey=<your secret accesskey>

You can also point Synchronize to an encrypted file containing your access keys using the --credentials switch.
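
The encrypted credentials variant would look something like this (the file name is made up; check the JetS3t documentation for how to create the encrypted file):
synchronize.bat --credentials c:\secure\my_s3.creds UP mysafebackup.backup.svn/svn_repos svn_repos.zip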

7-Zip
To zip my svn repository dumps, I use 7-Zip. A very easy, straightforward open source zipping tool with a command line interface. Make sure you can reach 7z from a command prompt before proceeding, else you have to modify the batch scripts below and add the path.

Java
Make sure you have Java 1.5 or higher installed on your system, and that you can run java -version from a command prompt.

Batch scripts
This is where it gets interesting.

I start off by creating a directory where my backup scripts will reside. It will also serve as a staging area before files are uploaded to Amazon S3. This is the current structure I have:
• backup
  • apex (I need this directory for temporary check out/in)
  • oracle (this is where I copy the export/splitter utilities from <apex install dir>/apex/utilities/oracle)
    • apex
      • APEXExport.class
      • APEXExportSplitter.class
  • classes12.jar (jdbc drivers)

I then place four backup scripts directly into the backup directory.
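
Setting up the staging area is a one-off job, something like this (adjust the paths to your own installation):
mkdir c:\development\backup
xcopy "<apex install dir>\apex\utilities\oracle" c:\development\backup\oracle /S /E /I
copy <jdbc download dir>\classes12.jar c:\development\backup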

backup_apex_app.bat
This script is responsible for checking out the APEX files from svn, extracting the current version of the APEX application from the database, and checking it into svn again. It takes one parameter (%1) containing the APEX application id. All other necessary parameters are set in master.bat below.
REM ******** Start Backup APEX apps ************
set APEX_APP_ID=%1

REM ** svn checkout
svn checkout file:///%APEX_REPOS_PATH% %APEX_SPLITTER_DIR%

REM ** Export APEX application (the backup dir itself must be on the
REM ** classpath so java can find the oracle.apex classes)
set CLASSPATH=%CLASSPATH%;%APEX_BACKUP_DIR%;%APEX_BACKUP_DIR%\classes12.jar
java oracle.apex.APEXExport -db %DB_JDBC_CONN_STR% -user %DB_USER% -password %DB_USER_PWD% -applicationid %APEX_APP_ID%
java oracle.apex.APEXExportSplitter f%APEX_APP_ID%.sql

REM ** Copy files to svn directory
xcopy %APEX_BACKUP_DIR%\f%APEX_APP_ID%.sql %APEX_SPLITTER_DIR% /Y
xcopy %APEX_BACKUP_DIR%\f%APEX_APP_ID% %APEX_SPLITTER_DIR%\f%APEX_APP_ID% /S /E /Y /I

REM ** Remove superfluous files
rmdir %APEX_BACKUP_DIR%\f%APEX_APP_ID%\ /s /q
del %APEX_BACKUP_DIR%\f%APEX_APP_ID%.sql

REM ** Add and check in files to repository
cd %APEX_SPLITTER_DIR%
svn add * --force
svn ci -m "Automated backup"
cd %APEX_BACKUP_DIR%

backup_svn_repos.bat
This is the script responsible for dumping the svn repository and zipping the dump file. I suppose this could have been done with HOTCOPY as well. Old habits and such... You can include filters here to extract only the project you are interested in. All necessary parameters are set from the master.bat script below.
REM ******** Start dumping and zipping repository ************

REM ** Removing old zipped dumpfile
del %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.zip

REM ** Dumping svn repository
svnadmin dump %APEX_REPOS% > %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.dump

REM ** Zipping repository dump
7z.exe a %APEX_REPOS_DUMPFILE%.zip %APEX_REPOS_DUMPFILE%.dump

REM ** Removing uncompressed dump file
del %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.dump
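
For reference, the hotcopy alternative mentioned above is a one-liner; you would then zip the resulting directory instead of a dump file:
svnadmin hotcopy %APEX_REPOS% %APEX_BACKUP_DIR%\repos_hotcopy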
    

backup_svn_to_s3.bat
This script is responsible for sending the zipped repository dump to Amazon S3.
REM ******** Backup SVN repository to S3 ************

REM ** Synchronizing svn repository dumpfile
call %JETS3T_PATH%\bin\synchronize.bat UP %BUCKET_NAME%/%BUCKET_PATH% %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.zip
    

master.bat
This is the script that ties it all together. If you want to test the scripts separately, just comment out (rem) the ones you don't want to call. Chances are there will be some issues with the parameters on the first run, so testing in small stages is key when debugging (it produces quite a bit of output during a normal run).

In this script you need to adjust all parameters to your own environment. Example values are included in the sample below.

rem *********** Setting environment parameters *******************

rem ** backup directories
set APEX_BACKUP_DIR=<your backup dir, like c:\development\backup>
set APEX_SPLITTER_DIR=%APEX_BACKUP_DIR%\apex

rem ** path to apex application directory inside svn repository
set APEX_REPOS=<the actual path of your svn repository, like c:\SVN\myapprepos>
set APEX_REPOS_PATH=<full path to your APEX folder inside your svn repository, note the forward slashes, like c:/SVN/myapprepos/trunk/apex>
set APEX_REPOS_DUMPFILE=<name of your repository dump file, like svn_repos, without postfix/file type>

rem ** connect string format host:port:sid
set DB_JDBC_CONN_STR=<jdbc connect string, like localhost:1521:orcl3>
set DB_USER=<username, like scott>
set DB_USER_PWD=<password, like tiger>

rem ** S3
set JETS3T_PATH=<full path to your jets3t unzip dir, like C:\development\backup\jets3t>
set BUCKET_NAME=<name of the Amazon S3 bucket you created earlier, like mysafebackup.backup.svn>
set BUCKET_PATH=<name of path inside bucket, like svn_repos>

rem ************ Calling batch files ************************
call backup_apex_app.bat <application id>
call backup_apex_app.bat <other application id>
call backup_svn_repos.bat
call backup_svn_to_s3.bat

pause

Good to Go!?!
The only thing remaining for fully automated backups is to schedule a regular execution of the script through the Windows Task Scheduler.
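
Something like this sets up a nightly run (remember to remove the pause at the end of master.bat first, or the scheduled run will hang waiting for a key press):
schtasks /create /tn "APEX backup" /tr c:\development\backup\master.bat /sc daily /st 02:00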

I know I felt a whole deal better after I started using this backup strategy :-)