Tuesday, April 29, 2008

WebLink: Sending Pro/ENGINEER Point Data to Excel

Extracting point data from your Pro/ENGINEER models can be a cumbersome task, especially if you follow the suggested PTC export/import process. This can be streamlined significantly with Pro/WebLink, and similarly with J-Link or Pro/Toolkit.

I've discussed, in previous articles, obtaining lists of features and recursing through assemblies. What's different here is that a transformation matrix will be used to obtain the XYZ position of the point with respect to the default coordinate system of the assembly. This is the position data that will be output to Excel.


The HTML page has a single button, which initiates the point data extraction, and a single div field for some results. Library files pfcUtils.js and pnts2excel.js contain JavaScript code. pfcUtils.js is a utility library provided by PTC. The code discussed in this article would be contained in the pnts2excel.js file.

<HTML>
<SCRIPT LANGUAGE="JavaScript" type=text/javascript src="pfcUtils.js"></SCRIPT>
<SCRIPT LANGUAGE="JavaScript" type=text/javascript src="pnts2excel.js"></SCRIPT>
<BODY>
<form name="f">
<INPUT name="a" type=button value="Get Point Data!" onclick="GetPoints()">
<br><div id="mesg"></div><br>
</form>
</BODY>
</HTML>

 

After the GetPoints() function has obtained the session object and verified that a model is active, it sets up an object used for some persistent data and for returning an array of points.

The object has the following properties: modelsFound (array), points (array), root (top-level model object), comppath_seq (intseq object) and transform (assembly transformation matrix). GetPointData() is called using the current model and appdata object to obtain the point data. Then PntArrayToExcel() is used to send the data to Excel.

function GetPoints () {

if (!pfcIsWindows())
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");

var elem = document.getElementById("mesg");

var modelname = "no model";
var session = null;
var model = null;

try {
session = pfcGetProESession();
}
catch (e) {
elem.innerHTML = "ERROR: Cannot connect to Pro/Engineer session!";
return;
}

try {
model = session.CurrentModel;
}
catch (e) {
// probably no model
elem.innerHTML = "Problem getting current model info.";
return;
}

elem.innerHTML = "<br>" + "Top Level Model: " + model.FileName;

// Create appdata object
//
var appdata = new Object();
appdata.modelsFound = new Array();
appdata.points = new Array();
appdata.root = model;
appdata.comppath_seq = new pfcCreate("intseq"); // need 'new', this is an instance object
appdata.transform = null;

GetPointData(model, appdata);
PntArrayToExcel(appdata.points);
}

 

There are two main actions in the GetPointData() function: point extraction, and recursing (for subassemblies). The modelsFound array is used in both actions and helps to avoid extracting data from a model more than once. The assignment statement flags the currently encountered model as processed, so that it will not get processed again.

After that, a sequence of points is obtained from the current model using the ListItems() method, specifying ITEM_POINT. This returns all points in the model, not just a single feature, which is important if point array features are present in the model. Using the 'Point' property of the point, we can get a Point3D object which has the XYZ info. If there is a transform matrix available, it is applied to the point data first. The owning model and the XYZ data of the point are assigned to an object, which is then pushed into the 'points' array property of the appdata object.

In the recursing action, if the encountered model is an assembly, the code iterates over its component features, if any. If a component is active, its feature id is appended to the comppath_seq sequence, which is used to build a ComponentPath. The ComponentPath provides the component's model object and its transformation matrix relative to the root assembly's default coordinate system; the matrix is saved into the appdata object.

If the component has not been encountered already, GetPointData() is called recursively with the component model info. After the function returns, the last element of the comppath_seq is removed.

function GetPointData ( model, appdata ) {

var elem = document.getElementById("mesg");
appdata.modelsFound[model.FileName] = 1;


// Get points in current model
//
var points = model.ListItems( pfcCreate("pfcModelItemType").ITEM_POINT );

for (var i = 0; i < points.Count; i++) {
var point = points.Item(i);
var pnt3d = null;

if (appdata.transform == null) {
pnt3d = point.Point;
}
else {
pnt3d = appdata.transform.TransformPoint(point.Point);
}

// send pnt data to the browser
elem.innerHTML += "<br> " + model.FileName + ": "
+ point.GetName() + " (Id " + point.Id + ")"
+ ", XYZ= ( "
+ pnt3d.Item(0) + ", "
+ pnt3d.Item(1) + ", "
+ pnt3d.Item(2) + " )"
;

var object = new Object();
object.Owner = model;
object.Point = pnt3d;

appdata.points.push(object);
}


// Recurse into components, if model is an assembly
//
if ( model.Type == pfcCreate("pfcModelType").MDL_ASSEMBLY ) {

var components = model.ListFeaturesByType( false, pfcCreate("pfcFeatureType").FEATTYPE_COMPONENT );

for (var i = 0; i < components.Count; i++) {

var compFeat = components.Item(i);

if (compFeat.Status != pfcCreate("pfcFeatureStatus").FEAT_ACTIVE) {
continue;
}

// Append id for use in building comppath
appdata.comppath_seq.Append(compFeat.Id);

try {
// Create ComponentPath object to get pfcModel object of component and transform
var cp = pfcCreate("MpfcAssembly").CreateComponentPath( appdata.root, appdata.comppath_seq );
var compMdl = cp.Leaf;
appdata.transform = cp.GetTransform(true);
} catch (e) {
elem.innerHTML += "<br> CreateComponentPath() exception: " + pfcGetExceptionType(e);
}

// Descend into subassembly
if ( !(compMdl.FileName in appdata.modelsFound) ) {
GetPointData(compMdl, appdata);
}

// Remove id (last in seq), not needed anymore
try {
appdata.comppath_seq.Remove( (appdata.comppath_seq.Count-1), (appdata.comppath_seq.Count) );
} catch (e) {
elem.innerHTML += "<br> comppath_seq.Remove exception: " + pfcGetExceptionType(e);
}

} // Loop: components

} // model.Type
}

 

The PntArrayToExcel() function sends the data to Excel. The code first tries to use an existing Excel session, but will start a new one if necessary. Certain IE security settings may result in a new session being started every time.

Once an Excel session is available and a new workbook has been created, the code iterates over the 'points' array property of the appdata object to write the data into the active sheet. Four columns are used in the output: the model name and the X, Y, and Z positions. Since a particular coordinate system was not referenced, the default coordinate system of the top-level assembly is used.

function PntArrayToExcel ( array ) {

var oXL;
var elem = document.getElementById("mesg");

// Get/Create Excel Object Reference
try {
oXL = GetObject("","Excel.Application"); // Use current Excel session
}
catch (e) {
// couldn't get an excel session, try starting a new one
try {
oXL = new ActiveXObject("Excel.Application"); // Open new Excel session
}
catch (e) {
// couldn't start a new excel session either
}
}

if (oXL == null) {
elem.innerHTML = "Could not get or start Excel session!";
return;
}

try {
oXL.Visible = true;
var oWB = oXL.Workbooks.Add();
var oSheet = oWB.ActiveSheet;
}
catch (e) {
elem.innerHTML = "Problem creating new workbook.";
return;
}

for (var i=0; i < array.length; i++ ) {
var pnt3d = array[i].Point;
var ownerMdl = array[i].Owner;

oSheet.Cells(i+1, 1).Value = ownerMdl.FileName;
oSheet.Cells(i+1, 2).Value = "" + pnt3d.Item(0);
oSheet.Cells(i+1, 3).Value = "" + pnt3d.Item(1);
oSheet.Cells(i+1, 4).Value = "" + pnt3d.Item(2);
}
}

 

Other than the transformation matrix, the code is pretty straightforward and easily adaptable to other data sets (e.g. parameters, layers, feature lists, etc.).
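
As a rough illustration, here is a minimal sketch of a per-model extraction function that collects a named parameter instead of point data, reusing the GetParam()/ParamValue pattern shown in the BOM article further down this page. The parameter name "DESCRIPTION" and the function name are only placeholders:

function GetParamData ( model, appdata ) {

// sketch only: look up one parameter, falling back to a default value
var value = " -- n/a -- ";

try {
var param = model.GetParam("DESCRIPTION"); // example parameter name
if (param.Value.discr == pfcCreate("pfcParamValueType").PARAM_STRING) {
value = param.Value.StringValue;
}
}
catch (e) {
// parameter probably doesn't exist, keep the default value
}

// store the owning model and the value, like the point version does
var rec = new Object();
rec.Owner = model;
rec.Value = value;
appdata.points.push(rec);
}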


Questions and comments are always welcome, either here on my blog or at MarcMettes@InversionConsulting.com.

Monday, April 28, 2008

J-Link: Compile Your Pro/ENGINEER Java Application with Intralink!

If you've wanted to try some Pro/ENGINEER programming using J-Link (the free Java API), but didn't know how to compile your program, then this article is for you. Forget NetBeans, forget Eclipse, and forget downloading and installing the JDK. Just use Intralink's Java Runtime Environment to compile your apps.

Seriously though, those IDEs are really great programs, but sometimes you can't install them, or they're not available on your system, or you just need something minimal and fast. Fortunately, Intralink contains everything we need to compile J-Link applications: a JRE and tools.jar. tools.jar is the Javasoft library that contains the Java compiler code.

Here is a batch file that will compile J-Link Java code on Windows:

@echo off
rem jlink_javac.bat - Compile JLink Apps using Intralink JDK

set Proe_Install_Dir=c:\ptc\proeWildfire2.0_m200
set IL_Install_Dir=c:\ptc\proiclient3.3_m021
set CLASSPATH=%IL_Install_Dir%\i486_nt\lib\tools.jar
set path=%IL_Install_Dir%\i486_nt\jre\bin;%path%

java sun.tools.javac.Main -classpath %Proe_Install_Dir%\text\java\pfc.jar -d . %*
 
Run it like this:

jlink_javac.bat Abc.java Def.java Ghi.java
 

Here is a C shell script that will compile J-Link Java code on Unix:

#!/bin/csh
# jlink_javac.csh - Compile JLink Apps using Intralink JDK

setenv Proe_Install_Dir /opt/proeWildfire2.0_m200
setenv IL_Install_Dir /opt/proiclient3.3_m021
setenv CLASSPATH $IL_Install_Dir/sun4_solaris/lib/tools.jar
set path=( $IL_Install_Dir/sun4_solaris/jre/bin $path )

java sun.tools.javac.Main -classpath $Proe_Install_Dir/text/java/pfc.jar -d . $*
 
Run it like this:

jlink_javac.csh Abc.java Def.java Ghi.java
 

The resulting class files will be in the current directory. While I use this for my own J-Link code, which typically targets Java 1.4.2, it may not work with newer Java versions, nor take advantage of their full capabilities. However, for you command line junkies out there, this is just the recipe.
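
For comparison, if you do have a full JDK installed, the equivalent compile with plain javac would look something like this (a sketch, reusing the Wildfire 2.0 install path from the batch file above):

javac -classpath c:\ptc\proeWildfire2.0_m200\text\java\pfc.jar -d . Abc.java Def.java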

Saturday, April 26, 2008

Pro/WebLink: Sending Your Pro/ENGINEER Assembly BOM to Excel

One question I read frequently on the forums is about how to get BOM data of a Pro/ENGINEER assembly into Excel. Typically the solutions involve saving files to disk, then some editing, and finally reading that data into Excel.

This example will demonstrate how to skip these extra steps and, using Pro/WebLink, send your BOM directly from Pro/ENGINEER into Excel.

The HTML Page

The starting point is this very simple HTML page. At the beginning, it pulls in two JavaScript libraries, pfcUtils.js and bom2excel.js. As mentioned in my previous Pro/WebLink article, pfcUtils.js is a small PTC-provided library. bom2excel.js will contain the remaining JavaScript code mentioned in this article.

The HTML page also contains two buttons and two div fields. The two div fields are "buckets" used for output and status messages and will contain HTML code added programmatically. One button initiates the action and the other clears the div fields.

<HTML>
<SCRIPT LANGUAGE="JavaScript" type=text/javascript src="pfcUtils.js"></SCRIPT>
<SCRIPT LANGUAGE="JavaScript" type=text/javascript src="bom2excel.js"></SCRIPT>
<BODY>

<form name="f">

<br><INPUT id="get_btn" type=button value="Get BOM" onclick="GetData()">
<INPUT id="clr_btn" type=button value="Clear" onclick="Clear()">
<br><div id="data"></div><br>
<br><div id="status"></div><br>

</form>

</BODY>
</HTML>


 

The Initialization Function

The GetData() function initializes the data structures, gets the BOM data using the recursive GetBOMData() function, and sends the data to Excel or the browser using the SendData() function.

Once we're sure that we're connected to a Pro/ENGINEER session properly and a model is active, the function sets up an object that will be used by the recursive GetBOMData() function. The properties of this object are "params", "comppath_seq" and "root".

The params property lists the columns that will appear in the output. Three of the columns ("LEVEL", "NAME", and "QTY") are special and have supporting code to populate their values. All others are presumed to be Pro/ENGINEER parameters and are treated as such.

The comppath_seq and root properties are used to transform component feature objects into model objects via the pfcComponentPath class.

When the appdata object has been set up, it is passed to GetBOMData(), which returns an array of "model arrays". Each "model array" contains information about one part or assembly that was encountered in the BOM. This array of arrays is assigned to the "values" property of the appdata object.

The object is then passed to SendData(), which will attempt to put the data into Excel.

function GetData () {

if (!pfcIsWindows())
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");

var data_elem = document.getElementById("data");
var session = null;
var model = null;

// Get session object
try { session = pfcGetProESession(); }
catch (e) {
data_elem.innerHTML = "ERROR: Cannot connect to Pro/Engineer session!";
return;
}

// Make sure there is a model active
try { model = session.CurrentModel; }
catch (e) {
// probably no model
data_elem.innerHTML = "Problem getting current model info.";
return;
}

data_elem.innerHTML = "<br>" + "Top Level Model: " + model.FileName;

// Setup appdata object for bom data
var appdata = new Object();
appdata.params = new Array( "LEVEL", "NAME", "QTY", "DESC", "PROI_CREATED_ON" );
appdata.comppath_seq = new pfcCreate("intseq");
appdata.root = model;

// get bom data as an array of arrays
appdata.values = GetBOMData(model, appdata);

// send bom data
SendData(appdata);
}



 

The Recursive Function

The GetBOMData() function recursively gathers the BOM data for an assembly. There are three main actions performed in this function: attribute gathering, recursing (for subassemblies), and quantity adjustments.

Before the attribute gathering for the current model, the parent name of the currently encountered component is stored. The logic used here flattens the tree structure of the assembly into a single array, so the parent needs to be tracked in order to adjust the quantity for the current level only.

In the attribute gathering code, you'll see code handling the three special attributes: level, name, and qty. Name is simply the model name. Qty is used here only for the top-level object, which always has a quantity of one. Level is calculated from the comppath_seq property. The ComponentPath is essentially an array of feature ids that lets you walk through the assembly structure to a specific component. The length of the array indicates the component's level in the assembly.

Any other item encountered in the params property of the appdata object is assumed to be a Pro/ENGINEER parameter, and the GetParam() method is used to obtain its object. A try block handles the situation where there is no parameter of that name, in which case a default value is used instead.

In the recursing section, which is skipped if the encountered model is a part, the code loops through all of the assembly components. There are four main actions performed in the loop. First is building the ComponentPath, by appending the component's feature id, which gives the pfcModel object of the component. Second is determining whether to recurse, and handling the resulting arrays if it does. Components are not processed more than once at a given level. In the third action, the quantity count is initialized, if necessary, and incremented. Finally, the component id is removed from the comppath_seq.

The final task in GetBOMData() is to adjust the quantity, by looking up component names in the qtyCount associative array. This is done only for components returned from recursive calls, which explains why the loop starts at index 1, not 0. A component cannot know how many times it is assembled; this can only be known from the subassembly level.

Finally, the array of model_arrays is returned back to the previous level.

function GetBOMData ( model, appdata ) {

var data_elem = document.getElementById("data");
var status_elem = document.getElementById("status");

var model_array = new Array(); // data for this model
var return_array = new Array(); // array to store model_array's

// Assign parent attribute for qty count
//
try {
model_array["PARENT"] = appdata.parent.FileName;
}
catch (e) {
// ignore exception, probably top-level asm
model_array["PARENT"] = "";
}


// Get params of current model
//
for (var i = 0; i < appdata.params.length; i++) {

if (appdata.params[i] == "LEVEL") {
model_array["LEVEL"] = appdata.comppath_seq.Count+1;
}
else if (appdata.params[i] == "NAME") {
model_array["NAME"] = model.FileName;
}
else if (appdata.params[i] == "QTY" && model == appdata.root) {
model_array["QTY"] = 1;
}
else {
var param = null;
var paramvalue = " -- n/a -- ";

try {
// get parameter object
param = model.GetParam(appdata.params[i]);

// get parameter value
switch (param.Value.discr) {
case pfcCreate("pfcParamValueType").PARAM_STRING:
paramvalue = param.Value.StringValue;
break;
case pfcCreate("pfcParamValueType").PARAM_INTEGER:
paramvalue = param.Value.IntValue;
break;
case pfcCreate("pfcParamValueType").PARAM_BOOLEAN:
if (param.Value.BoolValue)
paramvalue = true;
else
paramvalue = false;
break;
case pfcCreate("pfcParamValueType").PARAM_DOUBLE:
paramvalue = param.Value.DoubleValue;
break;
}
}
catch (e) {
// param probably doesn't exist, ignore
}

// store param value in model array
model_array[appdata.params[i]] = paramvalue;
}
}

// store model array in return array
return_array.push(model_array);


// Recurse into components, if model is an assembly
//
if ( model.Type == pfcCreate("pfcModelType").MDL_ASSEMBLY ) {

var compMdl = null;
var qtyIndexName = null;
var qtyCount = new Array();

// get component sequence of current subasm
var components = model.ListFeaturesByType( false, pfcCreate("pfcFeatureType").FEATTYPE_COMPONENT );

// loop through components
for (var i = 0; i < components.Count; i++) {

var compFeat = components.Item(i);

if (compFeat.Status != pfcCreate("pfcFeatureStatus").FEAT_ACTIVE) {
continue; // skip inactive components
}

// Append component id to sequence (for building ComponentPath)
appdata.comppath_seq.Append(compFeat.Id);

// get model object of component
try {
// have to create ComponentPath object first, then use "Leaf" property
var cp = pfcCreate("MpfcAssembly").CreateComponentPath( appdata.root, appdata.comppath_seq );
compMdl = cp.Leaf;
} catch (e) {
status_elem.innerHTML += "<br> CreateComponentPath() exception: " + pfcGetExceptionType(e);
}

// using a unique index (subasm & comp names) for the qty count array
qtyIndexName = model.FileName+"/"+compMdl.FileName
appdata.parent = model;

// Descend into subassembly, if model has not been processed in this subasm
if ( !(qtyIndexName in qtyCount) && compMdl != model ) {
// concatenated arr into return_array (concat doesn't seem to work)
var arr = GetBOMData(compMdl, appdata);
for (var j=0; j<arr.length; j++) {
return_array.push(arr[j]);
}
arr = null;
}

// initialize and increment qty count for this subasm/component
if ( ! (qtyIndexName in qtyCount) ) {
qtyCount[qtyIndexName] = 0;
}
qtyCount[qtyIndexName]++;


// Remove last id in sequence, not needed anymore
try {
appdata.comppath_seq.Remove( (appdata.comppath_seq.Count-1), (appdata.comppath_seq.Count) );
} catch (e) {
status_elem.innerHTML += "<br> comppath_seq.Remove exception: " + pfcGetExceptionType(e);
}

} // Loop: components


// process arrays (for qty adjust) returned from GetBOMData() call
for (var i = 1; i < return_array.length; i++) {

var compName = return_array[i]["NAME"];
qtyIndexName = model.FileName+"/"+compName;

// Adjust qty for current level objects
if (return_array[i]["PARENT"] == model.FileName) {
for (var j = 0; j < appdata.params.length; j++) {

// make sure qty was requested
if (appdata.params[j] == "QTY") {
if (qtyIndexName in qtyCount) {
return_array[i]["QTY"] = qtyCount[qtyIndexName];
}
else {
return_array[i]["QTY"] = 1;
}
}

}
}
}

qtyCount = null;

} // model.Type

return return_array;
}



 

The Sending Function

The SendData() function is used to send the data to Excel (Windows) or to the browser (Unix).

On Windows, the code gets an Excel session object, either from an existing session or by starting a new one, if necessary. Your IE security settings may cause a new session to be started every time. A new workbook is created, and the data is written to the cells, headers first, then data rows.

The column header values are pulled from the params property array of the appdata object. These values are used to look up values in each model_array from the values property. You'll note that Excel cell indexes start at 1 and not 0 as with the JavaScript arrays.

On Unix, the data is written to the "data" div field on the HTML page, also using the params property for the headers and values property for the parameter values.

function SendData ( appdata ) {

var oXL = null;
var data_elem = document.getElementById("data");

if (appdata.values.length == 0) {
data_elem.innerHTML = "No data to send!";
return;
}

if (pfcIsWindows()) {

// Get/Create Excel Object Reference
try {
oXL = GetObject("","Excel.Application"); // Use current Excel session
}
catch (e) {
// couldn't get an excel session, try starting a new one
try {
oXL = new ActiveXObject("Excel.Application"); // Open new Excel session
}
catch (e) {
// couldn't start a new excel session either
}
}

if (oXL == null) {
data_elem.innerHTML = "Could not get or start Excel session!";
return;
}

// Create new workbook
try {
oXL.Visible = true;
var oWB = oXL.Workbooks.Add();
var oSheet = oWB.ActiveSheet;
}
catch (e) {
data_elem.innerHTML = "Problem creating new workbook.";
return;
}

// Write header cells
for (var i=0; i < appdata.params.length; i++ ) {
oSheet.Cells(1, i+1).Value = appdata.params[i];
}

// Write data cells
for (var i=0; i < appdata.values.length; i++ ) {
for (var j=0; j < appdata.params.length; j++ ) {
oSheet.Cells(i+2, j+1).Value = appdata.values[i][appdata.params[j]];
}
}
}
else {

// Not a windows platform, write data to browser

// Write header cells
data_elem.innerHTML += appdata.params.join(" &nbsp; / &nbsp; ");

// Write data cells
for (var i=0; i < appdata.values.length; i++ ) {
data_elem.innerHTML += "<br>";
for (var j=0; j < appdata.params.length; j++ ) {
if (j > 0) { data_elem.innerHTML += " / "; }
data_elem.innerHTML += appdata.values[i][appdata.params[j]];
}
}

}

}


 

The Cleanup Function

The Clear() function is very simple. It just blanks the content div fields.

function Clear() {
var data_elem = document.getElementById("data");
var status_elem = document.getElementById("status");
data_elem.innerHTML = "";
status_elem.innerHTML = "";
}


 

The code is somewhat more complex than I had expected, but this is largely due to the quantity adjustment. Strip that out and the code is significantly more terse, but less functional of course. I have an enhanced version of this application that gets the attribute data from a list in a text field. This is a bit more practical because it allows for changes at runtime without having to edit the code. If there is interest, I will discuss those changes.
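
As a rough sketch of that enhancement, the params array could be read from a text field instead of being hard-coded; the field id "param_list" below is hypothetical:

<!-- hypothetical input field added to the form -->
<INPUT id="param_list" type=text size=60 value="LEVEL, NAME, QTY, DESC, PROI_CREATED_ON">

// in GetData(), replace the hard-coded params array with the field contents
var listField = document.getElementById("param_list");
appdata.params = listField.value.split(",");
for (var i = 0; i < appdata.params.length; i++) {
// trim whitespace and normalize to upper case
appdata.params[i] = appdata.params[i].replace(/^\s+|\s+$/g, "").toUpperCase();
}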


Questions and comments are always welcome, either here on my blog or at MarcMettes@InversionConsulting.com.

Tuesday, April 22, 2008

Intralink SQL: Monitoring User Passwords Part 2

Continued from Intralink SQL: Monitoring User Passwords Part 1

The Trigger:

Finally, we're ready for the trigger itself. It's a bit long and I'll try to explain it in sections. The three main sections define the trigger name and when it is executed, the declarations of cursors and variables, and the trigger body.

In this example, 'create or replace' allows us to create a new trigger or replace an existing trigger, without having to first drop the trigger. 'before update of userpassword' indicates that the trigger runs when the userpassword field of pdm.PDM_USER is changing, but not if the email address is changing. We are capturing each row change as it happens using 'for each row'.

In the declare section, two cursors and two variables are defined. The cursors allow looping through (potentially) many values in a table. The cursors here are used for looping through the old passwords less than 180 days old, and all bad passwords. The cursors are limited to rows related to the user undergoing the password change.

In the body of the trigger, virtual tables ":old" and ":new" are often used. As you might suspect, they represent the old values of the table row and the new values of the table row. The trigger can see both and decide upon what action to take.

Much of the trigger below is wrapped in an if-then block. This block determines whether the user being changed is the same as the user making the change (only an admin can change another user's password) and whether that user is an admin. If a non-admin user is changing their own password, the rules of the trigger are applied. If this 'if' statement (and its matching 'end if') is removed, the trigger applies to all accounts, even admin accounts.

The next if-then block prevents the user from reusing the current password (old password = new password) with 'raise value_error'. This raises a fictitious error, which stops any further execution, and the password change is denied. The user will get a cryptic error message, and there is no way to better inform them why; still, it will be obvious to most users that the password did not get changed.

The next two chunks of code open the old password and bad password cursors. The trigger loops through each row in each cursor looking for a match with the new password, and raises value_error if one is found. The code to process the cursors is nearly identical, even though the cursors are a little different.

The insert statement writes the necessary information to the tracking table. It mostly uses the new table row values, except for the ID which is pulled from the next available value of the sequence. For this trigger, regardless of admin change or not, all successful password changes are recorded to the tracking table.

To create the trigger, copy and paste everything from 'create' down through and including 'run;' into an sqlplus session. 'run' is not actually part of the trigger; it is needed by Oracle to compile the trigger. It doesn't actually run the trigger at that time.


create or replace trigger pdm.custom_pwchange_track_trg
before update of userpassword on pdm.PDM_USER
for each row
declare
cursor old_passwords(uid int) is
select userpassword from pdm.custom_pwchange_track
where userid=uid and modifiedon>=(sysdate-180);
old_pw old_passwords%rowtype;
cursor bad_passwords(uid int) is
select userpassword from pdm.custom_bad_passwords
where userid=uid;
bad_pw bad_passwords%rowtype;
begin
--
-- if a non-admin user is changing their own password, apply rules
if (:new.username = :new.modifiedby AND :new.usertype != 1) then
--
-- If new password is old password, raise exception
if (:new.userpassword = :old.userpassword) then
raise value_error;
end if;
--
-- Read bad passwords, make sure they are not used
open bad_passwords(:new.userid);
loop
fetch bad_passwords into bad_pw;
exit when bad_passwords%notfound;
if bad_pw.userpassword = :new.userpassword then
raise value_error;
end if;
end loop;
close bad_passwords;

--
-- Read old password values for this user, make sure
-- they do not reuse a password
open old_passwords(:new.userid);
loop
fetch old_passwords into old_pw;
exit when old_passwords%notfound;
if old_pw.userpassword = :new.userpassword then
raise value_error;
end if;
end loop;
close old_passwords;
--
end if;
--
-- Insert new password into tracking table
insert into pdm.custom_pwchange_track
(pwcid, userid, userpassword, modifiedby, modifiedon)
values
(pdm.custom_pwchange_track_seq.nextval, :new.userid,
:new.userpassword, :new.modifiedby, :new.modifiedon);
end;
.
run;
When the trigger is in place, if there is some form of error, this command will help (though only a little):

show errors trigger pdm.custom_pwchange_track_trg;

These commands can be used to report information about the trigger:

column OWNER format a8
column table_name format a30

select OWNER, TRIGGER_NAME, TRIGGER_TYPE,
TABLE_OWNER||'.'||TABLE_NAME as table_name, status
from
all_triggers
where
TRIGGER_NAME like 'CUSTOM_PWCHANGE%'
;

Should the trigger need to be disabled or re-enabled use these commands:

alter trigger pdm.custom_pwchange_track_trg disable;
alter trigger pdm.custom_pwchange_track_trg enable;

You may want to purge the old password table of old passwords, without dropping the table altogether. This is not absolutely necessary, as the trigger will ignore passwords more than 180 days old.

To delete old passwords use this command:
delete from pdm.custom_pwchange_track where modifiedon<sysdate-180;

Next time: Using Intralink Scripting to Change Passwords

Monday, April 21, 2008

Intralink SQL: Monitoring User Passwords Part 1

PTC provides very little control over passwords, and unfortunately expiration dates cannot be set from within the GUI. However, it can be accomplished in the Oracle backend using triggers. Not with C-based Intralink triggers (you can't trigger on a password change), but with PL/SQL-based Oracle triggers.

The basic concept is to setup a trigger that monitors changes to the password column of the user table (PDM_USER). Generally, the trigger writes the username, date, and optionally the "encrypted" password to a separate tracking table.

In the simplest application, an SQL query on the tracking table would report users who have not changed their password within the required period. A cron job or scheduled task can use this query to send nag messages to users. If desired, this process could update the user's password field in the PDM_USER table to a value that would effectively disable the account.

A more complex trigger can prohibit password reuse, disallow known bad passwords, purge old "good" passwords from the database, and potentially send emails itself.

In this two-part example, I will show you how to set up a trigger that tracks passwords, prohibits "known bad passwords", and prohibits some password reuse. 180 days is used here as a threshold, but any numeric value can be used; just be consistent.


Tracking Table:

You need a table to store the user/password/date information. The following code creates a table whose columns take their datatypes from the PDM_USER table itself, in the form of a query.


create table pdm.custom_pwchange_track unrecoverable as
select
userid as pwcid,
userid,
userpassword,
modifiedby,
modifiedon
from
pdm.PDM_USER
where
userid=-1
;

Even though the query doesn't return any values, it does provide a table structure with which to build a table. By matching datatypes with the PDM_USER table, we don't need to worry about the data size of the columns in the new table because they will be the same.


The table columns and sizes can be verified with 'describe':
describe pdm.custom_pwchange_track;

Here are a few examples of queries to get useful info:
-- set column widths (for two queries below)
--
column UserName format a15
column UserPassword format a15
column ModifiedBy format a12
column ModDateTime format a20
column lastchange format a20


-- report password history, all users
--
select PWCID, a.userid, username, a.userpassword, a.MODIFIEDBY,
to_char(a.modifiedon,'DD-MM-YY HH:MI:SS') moddatetime
from pdm.custom_pwchange_track a, pdm.pdm_user b
where a.userid=b.userid;


-- report users who have not changed their password in 180 days
--
select username,
to_char(max(a.modifiedon),'YY-MM-DD HH:MI:SS') lastchange
from pdm.custom_pwchange_track a, pdm.pdm_user b
where a.userid=b.userid
group by username having max(a.modifiedon)<sysdate-180;

Bad Password Table:

The bad password table concept is a little tricky to implement. Since the password is encrypted, the bad encrypted passwords would need to be stored for each user. A "new user" script or process could go through the motions of changing the new user's password to a list of bad passwords, before changing it to the password given to the user.

With the tracking table and trigger in place, the bad passwords can be pulled from the tracking table and inserted into the bad password table. As I said, it's a little tricky, but it can be done without too much effort. Might make for a big table if you get too strict about "bad" passwords.


Code to create the table, again, based on a query:
create table pdm.custom_bad_passwords unrecoverable as
select
userid as bpid,
userid,
userpassword
from
pdm.PDM_USER
where
userid=-1
;

Commands to verify the table:
describe pdm.custom_bad_passwords;

select * from pdm.custom_bad_passwords;


-- Take values from old password table for user 'fred' and insert
-- into bad password table
--
insert into pdm.custom_bad_passwords (bpid, userid, userpassword)
select pwcid, userid, userpassword from pdm.custom_pwchange_track
where userid=(
select userid from pdm.pdm_user where username='fred'
)
;

Sequence for unique IDs:

Typically a well designed database uses sequences to generate unique integer ids. This allows differentiation between each record in a table. In this case, it is not absolutely necessary, because we are not linking multiple tables together, but it's generally a good thing and very easy to do.

Code to create the sequence:
create sequence pdm.custom_pwchange_track_seq
increment by 1
start with 1
;

Commands to verify the sequence:
select SEQUENCE_OWNER,SEQUENCE_NAME,
MIN_VALUE,MAX_VALUE,INCREMENT_BY,LAST_NUMBER
from all_sequences
where SEQUENCE_NAME='CUSTOM_PWCHANGE_TRACK_SEQ';

Next time, implementing the trigger in Intralink SQL: Monitoring User Passwords Part 2.

Tuesday, April 15, 2008

Intralink Scripting: Creating Pro/ENGINEER Trail Files in Java

If you're trying to automate some Pro/ENGINEER activity, the use of trail files is a solid and reliable method. Whether you are trying to automate interference checks, convert files to ProductView or PDF, or just verify a large number of family tables, trail files work great. This assumes that you don't need any interaction between Intralink and Pro/ENGINEER during the trail file execution.

The first thing you need is a trail file template. This is simply a trail file that you have recorded. The content will be changed slightly to include variable names where real values will be substituted at runtime.

For example, this:
~ Activate `main_dlg_cur` `ProCmdModelOpen.file`
~ Select `file_open` `Ph_list.Filelist` \
1 `abc.prt`

Becomes this:
~ Activate `main_dlg_cur` `ProCmdModelOpen.file`
~ Select `file_open` `Ph_list.Filelist` \
1 `@NAME@`


One thing to note about trail files: they contain a lot of "fluff", meaning lines that you don't need but that got recorded anyway. You won't know what you need and don't need until you try to remove it. My advice is to comment out lines you think you don't need, by placing an exclamation point as the first character of the line. Pro/ENGINEER will ignore these lines. If the trail file plays as expected, the line is probably not needed.

This means that any line starting with an exclamation point can be eliminated. The one exception is the first line of the file. You need this; don't remove it. Generally, I leave some of the commented lines that were originally recorded in order to remind myself when an action has started or finished.

There is a config.pro option that will help prevent trail file fluff. Set CMDMGR_TRAIL_OUTPUT to YES and Pro/ENGINEER will reduce the number of lines and increase readability substantially.
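
For reference, the corresponding config.pro entry would look something like this (option name and value on one line):

CMDMGR_TRAIL_OUTPUT YES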

Here is an example of a trail file template that records a timestamp, opens a file, records another timestamp, and then exits. You'll notice the comments indicating "blocks" of actions.


!trail file version No. 1301
!!!
!!! Timestamp
!!!
~ Select `main_dlg_cur` `MenuBar1` \
1 `Info`
~ Select `main_dlg_cur` `Info.cb_info_session`
~ Close `main_dlg_cur` `MenuBar1`
~ Close `main_dlg_cur` `Info.cb_info_session`
~ Activate `main_dlg_cur` `psh_util_time`
!!!
!!! Open model
!!!
~ Activate `main_dlg_cur` `ProCmdModelOpen.file`
~ Select `file_open` `Ph_list.Filelist` \
1 `@NAME@`
~ Activate `file_open` `Open`
!Command ProCmdModelOpenExe was pushed from the software.
!!!
!!! Timestamp
!!!
~ Select `main_dlg_cur` `MenuBar1` \
1 `Info`
~ Select `main_dlg_cur` `Info.cb_info_session`
~ Close `main_dlg_cur` `MenuBar1`
~ Close `main_dlg_cur` `Info.cb_info_session`
~ Activate `main_dlg_cur` `psh_util_time`
!!!
!!! Exit
!!!
~ Select `main_dlg_cur` `MenuBar1` \
1 `File`
~ Close `main_dlg_cur` `MenuBar1`
~ Activate `main_dlg_cur` `File.psh_exit`
! Message Dialog: Warning
! : Do you really want to exit?
~ FocusIn `UI Message Dialog` `no`
~ FocusIn `UI Message Dialog` `yes`
~ Activate `UI Message Dialog` `yes`
!End of Trail File

Now that we have a template, we need some Java code to create the trail file that Pro/ENGINEER will run. There are many Java templating libraries that can do very complex manipulations, but for this example we only need basic substitutions. The code listed below is quite simple, but works great for basic trail file creation.

Once variables have been established, the first task is to generate file handle objects for reading (using the BufferedReader/FileReader classes) and for writing (using the FileWriter class). Each line of the template file is read using readLine() in the while loop. Then the value of the objName variable is substituted wherever @NAME@ is found in the template, and the resulting line is written to the output file. Closing both the input and output files is very important, especially on Windows, to make them fully accessible.

String text;
String inputFile = "trl-template.txt";
String outputFile = "trl-output.txt";
String newline = System.getProperty("line.separator");

// Set up input and output files
BufferedReader templateInput = new BufferedReader(new FileReader(inputFile));
FileWriter trailOutput = new FileWriter(outputFile);

// Read through all lines in file
while ((text = templateInput.readLine()) != null) {

// Replace @NAME@ with objName String
text = text.replaceAll( "@NAME@", objName );

// Output modified template line to given output file stream
trailOutput.write(text + newline);

}

trailOutput.close();
templateInput.close();

Following best practices, the code should be placed in its own function, or, even better, its own class. Also, using an array or HashMap, instead of a simple String object for input, would allow for more extensive substitutions to occur during trail file generation, but the above is the core of a simple approach that works very well.
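
As a minimal sketch of the HashMap idea (the token names and values below are only illustrative, and as with the snippet above this is assumed to run inside a method that can throw IOException):

import java.io.*;
import java.util.*;

// Sketch: substitute several @KEY@ tokens per line using a HashMap
Map tokens = new HashMap();
tokens.put("@NAME@", "abc.prt");
tokens.put("@ANSWER@", "yes"); // hypothetical second token

String text;
String newline = System.getProperty("line.separator");
BufferedReader templateInput = new BufferedReader(new FileReader("trl-template.txt"));
FileWriter trailOutput = new FileWriter("trl-output.txt");

while ((text = templateInput.readLine()) != null) {
// replace every token found in the map
for (Iterator it = tokens.keySet().iterator(); it.hasNext(); ) {
String key = (String) it.next();
text = text.replaceAll(key, (String) tokens.get(key));
}
trailOutput.write(text + newline);
}

trailOutput.close();
templateInput.close();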

Sunday, April 13, 2008

WebLink: Sending Data from Pro/ENGINEER to Microsoft Excel with JavaScript

Occasionally, data from Pro/ENGINEER is needed in Microsoft Excel, whether it is geometry values, parameter values, or BOM table contents. Data from any of these sources can be sent directly to Excel with Pro/WebLink, without writing any external CSV files and without running any other applications.

Technically speaking it is JavaScript functionality, or JScript as Microsoft likes to call it, and not really Pro/WebLink at all. To be useful though, it will be running within the context of a Pro/WebLink application.

Listed below is a function that takes an array of (number or text) values and writes the data into a new Excel workbook. You'll find many Visual Basic examples on the Internet that follow the same basic steps.

The most important part is getting the handle to an Excel session. The "new ActiveXObject()" call will start a new session of Excel, while the GetObject() call will obtain a handle to an existing Excel session. Depending on your Internet Explorer security settings (i.e. "Initialize and script ActiveX controls not marked safe for scripting"), you may have to use one or the other, but ideally both should work. Using an existing session is definitely more useful when sending data from Excel to Pro/ENGINEER.

After the handle is obtained, the session is set up to be visible with a new workbook (.xls file). A reference is then obtained to the active sheet. Using the "Value" property of a specific cell in the active sheet, we can put data into the cell, in this case from the array passed to the function.

function arrayToExcel ( array ) {
var oXL;

try {
oXL = new ActiveXObject("Excel.Application"); // Use new session
// oXL = GetObject("","Excel.Application"); // Use existing session
}
catch (e) {
alert("Excel must be running!");
return;
}

try {
oXL.Visible = true;
var oWB = oXL.Workbooks.Add();
var oSheet = oWB.ActiveSheet;
}
catch (e) {
alert("Problem creating new workbook.");
return;
}

for (var i=0; i < array.length; i++ ) {
oSheet.Cells(i+1, 1).Value = array[i];
}
}

Here is some example code that populates an array and calls the "arrayToExcel()" function:
var array = new Array();
array.push(1.11);
array.push(2.22);
array.push(3.33);
arrayToExcel(array);

As always, comments and questions are welcome.

Thursday, April 10, 2008

Intralink Scripting: Working with Dates in Locate

When recording dates during a Locate using the Scripting Options dialog, Intralink records the date as the number of milliseconds since the beginning of 1970.

IL.addFilter( "Created On", ">=", new Object[]{new java.util.Date(1104555600000L)} ); 
It works fine, but with milliseconds it's a little hard to change to a specific date/time. It's still workable if your dates are relative to now, for example, models that have changed in the last 24 hours.

Date onedayago = new Date( (new Date()).getTime() - 24*3600*1000 );
IL.addFilter( "Created On", ">=", new Object[]{onedayago} );
Creating a new Date object sets it to the date/time right now and the getTime() method returns the number of milliseconds since 1970 for that object. For our "24 hours ago" Date object, we just need to subtract 24*3600*1000 milliseconds from that value and use it to create our object.

These are just quick and dirty calculations; if you need more precision, you'll need to turn to the Calendar-based classes.


If we need to specify absolute date/time values, we'll have to incorporate the GregorianCalendar class into the mix. This class is the "official" way to specify absolute dates, provided you follow the Gregorian calendar.

The first step is to create the two objects and assign date/time values. Here we are setting year, month, day, hour, minute, and second values, but there are other options, such as add(), roll(), and setTime():
GregorianCalendar cal1 = new GregorianCalendar();
GregorianCalendar cal2 = new GregorianCalendar();
cal1.set(2008,0,1,0,0,0); // 2008-01-01 00:00:00
cal2.set(2008,2,15,23,59,0); // 2008-03-15 23:59:00

For the addFilter() method, we'll use the getTime() method on each GregorianCalendar object, which returns a Date object:
IL.addFilter( "Created On", ">=", new Object[]{ cal1.getTime() }
IL.addFilter( "Created On", "<=", new Object[]{ cal2.getTime() } ););

Intralink SQL: Changing Revision/Version with Oracle SQL

A PTC/User post of mine from April 2003

"Mongilio, Michael" wrote:

Does anyone know if it is possible to "successfully" change an objects revision to an earlier revision through Oracle? I have an object with two revisions "F" and "G" in the database. They should be "B" and "C". It would be a real hassle to delete both revisions and put them back in (especially since they are instances on a family table).

The following is a procedure to do what you asked for. The changes may work successfully, or they may not. Use them at your own risk. Definitely try them on a test server before messing with your production server.

To see all revisions (and all branches) of 'abc.prt':

set linesize 120
column PINAME format a35
column BRPATH format a20
column PIVREV format a6

select
piv.PIVID,
piv.PIVCLASS,
pi.PINAME,
br.BRPATH,
piv.PIVREV,
piv.PIVVER
from
pdm.PDM_PRODUCTITEMVERSION piv,
pdm.PDM_BRANCH br,
pdm.PDM_PRODUCTITEM pi
where
pi.PIID=br.PIID and
piv.BRID=br.BRID and
pi.PINAME='abc.prt'
;

Note: The value of PIVCLASS is 0, 1, or 2
  • 0: Non family table objects
  • 1: Instances
  • 2: Generics

To change the 'main' branch Revision/Version of 'abc.prt' from
'B.0' to 'A.3':
update pdm.PDM_PRODUCTITEMVERSION
set
PIVREV='A',
PIVVER=3,
MODIFIEDON=sysdate
where
PIVID IN (
select
piv.PIVID
from
pdm.PDM_PRODUCTITEMVERSION piv,
pdm.PDM_BRANCH br,
pdm.PDM_PRODUCTITEM pi
where
piv.BRID=br.BRID and
pi.PIID=br.PIID and
pi.PINAME='abc.prt' and
piv.PIVREV='B' and
piv.PIVVER=0 and
br.BRPATH='main'
)
;

In this example, the MODIFIEDON column is updated as well as the Revision/Version. If this does not occur, Intralink clients that have the revision cached (displayed in a browser window) get very confused and generate error messages, since they don't know to update from the Commonspace. The column is a 'freshness' timestamp that the client uses to determine whether its cached data is current or out of date.

Also, be sure that you are updating the correct branch. You may have multiple files at 0.0, but only one will be in the 'main' branch.


I'm sure what I am asking is not recommended, but...

Definitely not recommended, but possible. If you are not careful, it is easy to make two revisions have the same Revision/Version (i.e. both could have 2.1), to change the order (i.e. 4.0 becomes 3.6 and 4.1 becomes 3.5), and to set a Revision to something that is not in the list of valid revisions. Trying to do any of these is a really bad idea.
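
One simple sanity check after an edit like this is to look for duplicate Revision/Version pairs on the same branch; a sketch using the same table as above:

-- report any branch that has two versions sharing the same Revision/Version
select BRID, PIVREV, PIVVER, count(*)
from pdm.PDM_PRODUCTITEMVERSION
group by BRID, PIVREV, PIVVER
having count(*) > 1;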

Changing these values also causes some issues with workspaces that have the files checked-out. The revision info in the workspace will no longer correspond to the revision info for the same version in the commonspace, unless the file is checked-out again.

Some unusual conflicts may be produced upon check-in, as later versions may now be present in the commonspace even though the workspace version really is the latest. These conflicts can be overridden with apparently no problems.


As long as you're careful, the procedure should not create any major problems.

Wednesday, April 9, 2008

WebLink: Hello World!

The Classic First Example Program for WebLink

Getting started with WebLink is a challenge because there aren't many examples provided by PTC. There are a few good examples in the documentation, but they represent only a small fraction of the API. This, together with the sometimes complex browser security issues, can make WebLink seem unapproachable.

In the interest of getting you up and running with WebLink, I'll discuss some of the major hurdles and provide a good, basic starting point.

On Windows, the Internet Explorer browser security model may require the following:

  • WebLink HTML page must be served by a web server
  • URL may need to be fully qualified (i.e. http://srv1.yourcompany.com/... not just http://srv1/...)
  • Internet Explorer should consider your web server as a "trusted host"
  • config.pro option WEB_ENABLE_JAVASCRIPT must be set to ON


For WebLink that may be enough, but if you are using other COM objects, such as when interfacing with MS Excel, changes to IE security settings may be required to grant your application more privileges. This is also a security risk, so be careful when doing this.


The Hello World example is self-contained other than loading the PTC-provided pfcUtils.js file. This file can be placed in the same folder on the web server as the HTML file. The example has a small form containing an anchor tag (with id of "mesg") and a button. The button executes the HitMe() function, which populates the contents of the anchor tag.

In nearly every application the pfcGetProESession() function is called, which is contained in the pfcUtils library file. The try/catch block around it verifies that the embedded browser is used, which is essential.

Once we have the session object, the current model object is obtained, from which we can get the name of the model. This value is displayed along with a message in the anchor tag. The try/catch block here helps verify that there is an active model (part, assembly, or drawing), because "model" will be null if there isn't. Trying to call any method against null is pretty much guaranteed to throw an exception.

WebLink Hello World Example:

<HTML>
<SCRIPT LANGUAGE="JavaScript" type=text/javascript src="pfcUtils.js"></SCRIPT>
<BODY>

<SCRIPT LANGUAGE="JavaScript">

function HitMe() {

if (!pfcIsWindows())
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");

var form_elem = document.getElementById("mesg");

var modelname = "no model";
var session = null;
var model = null;

try {
session = pfcGetProESession();
}
catch (e) {
form_elem.innerHTML = "ERROR: Cannot connect to Pro/Engineer session!";
return;
}

try {
model = session.CurrentModel;
modelname = model.FileName;
}
catch (e) {
// probably no model
form_elem.innerHTML = "Make sure a model is active!";
return;
}

form_elem.innerHTML = "Hello! My name is: " + modelname;

}

</SCRIPT>

<form name="f">
<INPUT name="btn1" type=button value="Hit me!" onclick="HitMe()">
<br><a id="mesg"></a>
</form>
</BODY>
</HTML>

If you have any questions, please ask. Comments and questions are welcome.

Tuesday, April 8, 2008

WebLink: What Is It Anyway?

A discussion of what WebLink is, and is not

WebLink is one of the two free APIs (J-Link being the other) provided by PTC to create Pro/Engineer applications. According to PTC's documentation, WebLink is a JavaScript-based library for use within the embedded web browser.

That sounds great, but it's about as accurate as describing a car as a pothole creator. Well, here in Detroit that's more true than not, especially on Van Dyke ... but I digress.

Truthfully, WebLink isn't really JavaScript-based at all. It's based on Microsoft's COM/ActiveX/OLE objects on Windows, and Mozilla's XPCOM on Unix. You don't even need to use JavaScript. On Windows, you can use VBScript, Perl, Python, or any one of your favorite languages that can access COM objects.

Here's an example of using VBScript in a WebLink application:

<HTML><BODY>
<SCRIPT type="text/vbscript">
function HitMe ()
dim mdlname
Set pfccomglob = CreateObject("pfc.MpfcCOMGlobal")
mdlname = pfccomglob.GetProESession.CurrentModel.FileName
Document.getElementById("mdlname").innerHTML = "Name: " & mdlname
msgbox("Name: " & mdlname)
end function
</SCRIPT>
<form name="f">
<INPUT name="a" type=button value="Hit me!" onclick="HitMe()">
<div id="mdlname"></div>
</form>
</BODY></HTML>

To make matters worse, you don't even need to use the web browser! A Pro/Toolkit DLL can access the WebLink COM objects using C/C++ and can do so completely outside the confines of the web browser. The Pro/Toolkit DLL could also go as far as hosting your favorite scripting engine. The web browser is just one of many environments in which a WebLink application can run.

Here's the same example using Perl to access WebLink COM Objects from within a Pro/Toolkit DLL:
use Win32;
use Win32::OLE;
$pfccomglob = Win32::OLE->new('pfc.MpfcCOMGlobal');
$mdlName = $pfccomglob->GetProESession->CurrentModel->FileName;
Win32::MsgBox("Name: $mdlName");

To summarize, WebLink requires neither Javascript nor a web browser. That clears things up, right? Why is it called "WebLink"? Well, you can use it in a web browser, and all PTC software has to have the word "Link", as a result: WebLink!

Can you imagine the response had they called it Pro/COM, or ActiveXLink, or COM.Link? A resounding "huh?" would have echoed through the halls on Kendrick Street. I think the marketing guys got it right this time.

Anyway, using whatever language in whatever environment you desire, the huge benefit of WebLink is that it's a rapid prototyping system for Pro/Engineer applications. The code-test-code-test cycle can be repeated as quickly as you can refresh the web browser. Whether you intend to migrate the application to J-Link or leave it in the web browser, you can get a complex application written very, very quickly. That's what makes WebLink very powerful.

Learn it, you won't regret it.

Sunday, April 6, 2008

Intralink Scripting: To Be or Not To Be, How to Best Answer the Question

Checking for Object Existence with CSPIObjectInfo and CSPIVObjectInfo

When processing Intralink objects in Java from a non-Intralink source, such as a text file or database, the typical process is to use Locate to search for the item to ensure that it exists. While this is usually sufficient, there are times when the overhead of using Locate is undesirable. The good news is that there is an alternative using the ObjectInfo-derived classes.

Continuing on my discussion of ObjectInfo classes, I'll discuss how to apply these classes to determine whether a PI (i.e. abc.prt) or a PIV (i.e. abc.prt/main/A/0) exists.

If we're checking for "my_assembly.asm", we'd follow these steps. First declare an object of type CSPIObjectInfo. Then attempt to create the object using the createByKey() method of the ObjectInfo class, providing the type as ObjectInfo.tPI and name in the variable piName.

Although I have it captured in a try{} block, it won't throw an exception if the PI doesn't exist; instead, as you may have guessed from the code, it merely returns null. If the object is null, it doesn't exist.

String piName = "my_assembly.asm";
CSObjectInfo pi_oi = null;

try {
pi_oi = (CSObjectInfo)ObjectInfo.createByKey( ObjectInfo.tPI, piName );
if (pi_oi == null) {
System.out.println( "pi_oi is null" );
}
else {
System.out.println( "pi_oi getName(): " + pi_oi.getName() );
}
}
catch (Exception e) {
System.out.println( "CSObjectInfo createByKey exception: " + e.getMessage() );
}

To determine if a PIV exists (i.e. "my_assembly.asm", main branch, revision 1, version 0), ObjectInfo.tPIV is the corresponding type to use, also with the createByKey() method of the ObjectInfo class. It has the same behavior, returning null if the PIV does not exist.
String pivName = "my_assembly.asm/main/1/0";
CSPIVObjectInfo piv_oi = null;

try {
piv_oi = (CSPIVObjectInfo)ObjectInfo.createByKey( ObjectInfo.tPIV, pivName );
if (piv_oi == null) {
System.out.println( "piv_oi is null" );
}
else {
System.out.println( "piv_oi getName(): " + piv_oi.getName() );
}
}
catch (Exception e) {
System.out.println( "CSPIVObjectInfo createByKey exception: " + e.getMessage() );
}

The major weakness with this approach for PIVs is that you have to know the name, revision, and version ahead of time. If you want to know what the latest revision/version is, the Locate approach will give you both confirmation of existence and knowledge of the revision/version at the same time.

You'll have to choose when it makes sense to use it, but it remains a very powerful technique nonetheless.

Friday, April 4, 2008

Intralink Scripting: Font Size Does Matter!

Using Java Swing Utilities to make the Intralink GUI text more readable

The font in the Intralink client is a little small for most people, and as a result changing the font size is a question that pops up from time to time. Unfortunately, Intralink doesn't provide a way to do this directly, which makes most people just give up.

Since we're dealing with Java, almost anything is possible, and changing the font is one of those things. There are basically three main steps: 1) Set up the font object to use, 2) Tell the UIManager to use the new font details, 3) Force a full GUI redraw. It's easiest to do this when Intralink is starting; then the user is good to go from that point on.

Steps 1 & 2 are encompassed into one function setMenuFontSizeDefault(). The first task is to obtain the font object used for menus ("Menu.font"). Then the details (Name, Style, Size) are extracted. The function is changing only the size, but a new font could be used as well (check the comments). With the changed font details, a new FontUIResource object is created.

The next task is to associate the new font object with many different font attributes, obviously in what looks to be a brute force attack.

private void setMenuFontSizeDefault(int size) {

// get font style and name from current menu font
//
String BaseFontSource = "Menu.font";
Font fontCurrent = UIManager.getFont(BaseFontSource);
String name = fontCurrent.getName();
int style = fontCurrent.getStyle();

String styleName = "(?)";
if (style == Font.PLAIN) { styleName = "(Plain)"; }
if (style == Font.ITALIC) { styleName = "(Italic)"; }
if (style == Font.BOLD) { styleName = "(Bold)"; }

System.out.println( "Current " + BaseFontSource + " name/style/size: "
+ name + "/" + style + styleName + "/" + fontCurrent.getSize() );

// Override font name and/or style by uncommenting one of these
// name = "Times New Roman";
// style = Font.BOLD; styleName = "(Bold)";

System.out.println( "Changing name/style/size to: " + name + "/" + style + styleName + "/" + size );

// create similar font with the specified size
//
FontUIResource fontResourceNew = new FontUIResource(name, style, size);


// change UI defaults for all types of menu components
//
UIManager.put("Button.font", fontResourceNew);
UIManager.put("CheckBox.font", fontResourceNew);
UIManager.put("CheckBoxMenuItem.acceleratorFont", fontResourceNew);
UIManager.put("CheckBoxMenuItem.font", fontResourceNew);
UIManager.put("ComboBox.font", fontResourceNew);
UIManager.put("DesktopIcon.font", fontResourceNew);
UIManager.put("EditorPane.font", fontResourceNew);
UIManager.put("FormattedTextField.font", fontResourceNew);
UIManager.put("InternalFrame.titleFont", fontResourceNew);
UIManager.put("Label.font", fontResourceNew);
UIManager.put("List.font", fontResourceNew);
UIManager.put("Menu.acceleratorFont", fontResourceNew);
UIManager.put("Menu.font", fontResourceNew);
UIManager.put("MenuBar.font", fontResourceNew);
UIManager.put("MenuItem.acceleratorFont", fontResourceNew);
UIManager.put("MenuItem.font", fontResourceNew);
UIManager.put("OptionPane.buttonFont", fontResourceNew);
UIManager.put("OptionPane.messageFont", fontResourceNew);
UIManager.put("PasswordField.font", fontResourceNew);
UIManager.put("PopupMenu.font", fontResourceNew);
UIManager.put("ProgressBar.font", fontResourceNew);
UIManager.put("RadioButton.font", fontResourceNew);
UIManager.put("RadioButtonMenuItem.acceleratorFont", fontResourceNew);
UIManager.put("RadioButtonMenuItem.font", fontResourceNew);
UIManager.put("Spinner.font", fontResourceNew);
UIManager.put("TabbedPane.font", fontResourceNew);
UIManager.put("Table.font", fontResourceNew);
UIManager.put("TableHeader.font", fontResourceNew);
UIManager.put("TextArea.font", fontResourceNew);
UIManager.put("TextField.font", fontResourceNew);
UIManager.put("TextPane.font", fontResourceNew);
UIManager.put("TitledBorder.font", fontResourceNew);
UIManager.put("ToggleButton.font", fontResourceNew);
UIManager.put("ToolBar.font", fontResourceNew);
UIManager.put("ToolTip.font", fontResourceNew);
UIManager.put("Tree.font", fontResourceNew);
UIManager.put("Viewport.font", fontResourceNew);
UIManager.put("JTitledPanel.title.font", fontResourceNew);

}

Probably a better way, but it works!


Step 3 is accomplished with the updateAllFrames() function. With the frames gathered from Frame.getFrames(), the font change is forced upon the frame (and the frames windows, if any) using SwingUtilities.updateComponentTreeUI().

private void updateAllFrames () {

Frame frames[] = Frame.getFrames();

for (int i = 0; i < frames.length; i++) {

if ( frames[i].getTitle().equals("") || frames[i].getTitle().equals("Pending RTP Forms") ) {
continue;
}

// Update the frames
try {
SwingUtilities.updateComponentTreeUI(frames[i]);
}
catch (Exception e) {
// Exception thrown, skip it
}

// Update the frame's windows
Window windows[] = frames[i].getOwnedWindows();
for (int j = 0; j < windows.length; j++) {
try {
SwingUtilities.updateComponentTreeUI(windows[j]);
}
catch (Exception e) {
// Exception thrown, skip it
}
}

}

}

The two functions are called in this order, as an example:

setMenuFontSizeDefault(15); // set font size to 15
updateAllFrames();



These import statements might be necessary as well:

import java.util.*;
import java.awt.*;
import javax.swing.*;
import javax.swing.plaf.*;





Wednesday, April 2, 2008

Intralink Scripting: Processing Java Command Line Arguments

Using the Command Line in your Java Code

Accessing the command line arguments for a program is a very useful way to control its behavior during runtime. This is no different with Intralink, especially when running Intralink Scripting autonomously, where you can't prompt the user for information.

A little bit of digging will unveil the ILArgumentParser class. This allows access to the command line arguments used with Intralink. The following two functions are very helpful in working with this class.

Arguments are expected to be in this form:

ilink -- -loginfile login.txt

This function indicates whether a command line argument keyword exists:
public boolean isCmdLineArg ( String argName ) throws Exception {

String argSpec = "-" + argName.toLowerCase();

// Check command line args for argName
//
try {
ILArgumentParser ilargumentparser = Intralink.m_arguments;
if ( ilargumentparser.containsKey(argSpec) ) {
return true; // argName is a command line argument
}
}
catch (Exception e) {
// exception: command line arg does not exist, ignore
}

return false; // argName is not a command line argument
}

This function returns a command line argument value:
public String getCmdLineArg ( String argName ) throws Exception {

String argValue = null;
String argSpec = "-" + argName.toLowerCase();

// Check command line args for argName
//
try {
ILArgumentParser ilargumentparser = Intralink.m_arguments;
if ( ilargumentparser.containsKey(argSpec) ) {
argValue = ilargumentparser.getValue(argSpec);
}
}
catch (Exception e) {
// exception: command line arg does not exist, ignore
}

return argValue;
}

Example usage:
String loginFile = "login.txt"; // default value
if (isCmdLineArg("loginFile")) { loginFile = getCmdLineArg("loginFile"); }