Uniface Libraries and Library Objects

 

Introduction

 

Certain objects in Uniface are members of a library.  Essentially, a library is a collection of these various objects, and each combination of library and object type forms its own name space.  So, multiple libraries may each contain an object of the same name and type.  For example, a constant must belong to a library, but several libraries could all have constants named TRUE, which would be referenced in proc code as <true>.

 

Depending on the type of object, references to the object may be resolved either at runtime or at compile time.  Uniface generally looks in several libraries in a specific order, and the first object of the right name and type that it finds is taken to be the correct one.  The search order varies depending on the type of object.

 

In the case of some object types, the search order is affected by the value of $language (both an assignment file setting and a proc function that can be set in code).  As its name suggests, $language supports the deployment of a single application that can present itself in one of several languages depending on which is appropriate to an end user.  This topic deserves its own discussion, however, and because it does not impact much of what follows, it will not be mentioned further here.

 

Libraries also play a role in version control.  Depending on configuration of the Uniface Development Environment (UDE), Uniface allows either individual library objects to be checked in and out of version control (except for constants), entire libraries of a given object type to be checked in and checked out, or both.  This “clustering” can be quite useful and convenient.

 

This document describes the different types of library objects and the search order Uniface employs for each object type, and then comments on the most likely strategies for organizing library objects.

 

 

Library Object Types and Usage Considerations

 

The following table lists library object types, describes each, comments on how they are used, and gives the library order in which Uniface searches for them.

 

Object Type

Comments

Constant

Library constants are Uniface objects whose associated value is substituted into code. 

 

 

Library Search Order

Uniface substitutes the literal value of a constant at compile time.  A constant may in turn reference other constants, which are then recursively substituted.  Constants are substituted after include procs are, so a #include placed inside a constant's value is not processed and remains a literal string.
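
As a hedged illustration (the constant names here are hypothetical), one library constant may be defined in terms of another; the compiler expands the reference recursively into a single literal:

            ; hypothetical library constants:
            ;   TAX_RATE   0.07
            ;   TAX_MSG    Sales tax is charged at a rate of <TAX_RATE>
            ; the statement below compiles as if the expanded literal had been typed by hand:
            message "<TAX_MSG>"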

 

When compiling a component and encountering a constant (where no local component constant exists), Uniface looks first for the constant in the component library (defined in component properties).  If the constant is not found there, Uniface then looks for the constant in SYSTEM_LIBRARY.

 

Similarly, when compiling a global proc or menu (which do not themselves have a component library), Uniface first looks for the constant in the library in which the global proc or menu is defined, then, if not found, looks in SYSTEM_LIBRARY.

 

When compiling components, global procs, or menus, Uniface also checks its own internal USYS library of constants corresponding to possible $procerror statuses.  The actual USYS library constants are hidden, but these constants all begin with UPROCERR_ in the name, are documented in various places, and can be referenced in application code under appropriate circumstances.  The USYS library must not be populated or otherwise modified by developers.
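
For example (the service and operation names below are hypothetical; UPROCERR_COMPONENT is intended as one of the documented USYS constants), a caller can compare $procerror against such a constant:

            activate "XX_AUDIT_SVC".write_trail(v_data)
            if ($procerror = <UPROCERR_COMPONENT>)
               ; the service component could not be located or started
               message "Audit service is unavailable."
            endif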

 

In searching through the various libraries, note that it does not matter what default library is named in the UDE assignment file using the $variation setting.  And since a constant’s value is fixed at compile time, the runtime library, the runtime value of $variation, and the application startup shell library do not impact the library search order at all.

 

There is no syntax that allows the library to be explicitly specified when referencing a constant in proc code.

 

 

Appropriate Usage

As the name would suggest, constants help to centralize specific literals (e.g., PI = 3.141592, TRUE = T, FALSE = F, OK = Ok).  Not only does this prevent a specific literal from having to be hardcoded throughout an application, but well-named constants are much more readable than what might otherwise appear to be an arbitrary number or string.  

 

Of course, library constants should not be used for values having meaning to only a single object, since local component constants or variables initialized to a value would be more appropriate.  Also, stored database values are preferable to constants for values reasonably subject to change, unless a change in the value would require other program changes anyway (for example, because of a change in Uniface versions).

 

Because constants are substituted exactly into code at compile time, they can be used to alias core, shared routines, which also tends to enhance readability.  For example, suppose there exists a service operation that does a string search and replace.  The developer would normally have to code something like:

            activate “XX_STRNGLIB”.srch_and_replace( input_string, search_string, replace_string, output_string )

If a constant named <srch_and_replace> were given the value activate “XX_STRNGLIB”.srch_and_replace, then a developer could accomplish the same thing with this code:

            <srch_and_replace>( input_string, search_string, replace_string, output_string )

This has the benefit of giving developers an easy to remember, entirely meaningful way to invoke a general purpose routine and allows for some future flexibility in relocating important routines later.  This technique should be employed for only the most universal of routines (not specific to one application) and only when readability is well enhanced.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests that application or project specific library constant names may optionally be prefixed with a system name.

 

If multiple libraries are employed, general purpose constants should carry purely meaningful names (e.g., TRUE for the constant <true>), so long as the name will not be mistaken for a prefixed name.  (Ideally, local component constant naming conventions will further prevent a general purpose library constant from being mistaken for a local one.)  In any case, constants relevant only to a particular project should be prefixed with the system name.

 

 

Miscellaneous Considerations

Constants are the only library objects that cannot be checked into and out of version control individually.  Instead, all constants for a given library must be checked in or checked out at the same time, and on checkout all existing constants will first be deleted.

 

Further, constants must be exported by library.  Importing any file which contains a constant from a given library will delete all existing constants in that library from the UDE. 

Counter

Counter objects implement a Uniface scheme to generate a unique number based on the last number used.

 

 

Associated Problems

Counter objects do not reliably generate a unique next number when multiple user sessions attempt to access them at the same time.

 

Counter objects are not available from self-contained forms and services.

 

Counters are unavailable to non-Uniface routines like PL/SQL triggers that might update data.

 

The last number generated for a counter (which is used to generate the next number) is stored in the uobj.dol file.  Whenever a meaningful data backup should include this number, or a restore requires the number to be synchronized with the data, or the uobj.dol file must be recreated because of a new version of Uniface or of an application, extra steps must be taken to populate the counter with the appropriate value.  So, any use of counters could require a good deal of special application support.

 

Due to the associated problems, counters are not further discussed.

Default Trigger

Default triggers are actually Uniface message objects that have names corresponding to any of the various Uniface triggers (the message name associated with each specific trigger is given at the end of the section documenting that trigger in the Proc Language Reference Manual - Volume 1).

 

When a new Uniface object is created (except for entity and field triggers in component templates), any of its triggers that do not inherit from a template or from the application model, and that correspond to a default trigger message, are loaded with the source code contained in that message.  Note that this is not the same as include procs or constants, for which code is substituted when compiling (leaving the source unaffected).

 

Code defaults into a trigger only when the object owning that trigger is created.  There is no persistent inheritance between triggers populated with default trigger code and the default trigger messages.

 

 

Library Search Order

Uniface looks first for a default trigger code message in the specific library and language named in the UDE assignment file (the $variation and $language settings, respectively), if any.  Otherwise, Uniface looks in its own set of default triggers installed with the product in the USYS library (the message source is hidden).  The USYS library should not be populated or otherwise modified by developers.
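
A minimal assignment file fragment illustrating these settings (the library and language codes are hypothetical examples):

            [SETTINGS]
            $variation = XXLIB
            $language  = USA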

 

Because Uniface defaults trigger source only at creation time, the process is unaffected by component library, runtime library, the runtime value of $variation, and startup shell library.

 

 

Appropriate Usage

Default triggers are used implicitly, whether or not application developers define their own.  Application developers should define their own default trigger messages, however, keeping in mind that the defaults are common to everyone sharing the development environment and should follow IU standards.

 

If all components are created from templates, component level trigger defaults will have meaning only when the templates themselves are created, and the defaults should be defined accordingly.

 

If entities are created from entity templates (not an actual construct in Uniface but a process where one of several entities defined for that purpose is duplicated as the first step in defining a new entity), then the entity level trigger defaults should be appropriate to unmodeled component entities (that is, entities painted on the fly on forms).

 

All fields (in the model or not) and field templates (which are likely large in number compared to component and entity templates, or are applied to only a minority of fields) directly or indirectly get the default field triggers, so the defaults should be quite generic; it is tedious to constantly adjust field triggers.

 

 

————————————

See also Help Text / Message

Device / Keyboard Translation Table

Device translation tables define how Uniface outputs to printers and screens.  Keyboard translation tables define how user input maps to structure editor functions. 

 

 

Library Search Order

Except for the USYS library, which is off limits to developers, Uniface looks for translation tables only in the library specified by $variation (whether set in the assignment file or by assigning the Uniface function).  Device translation tables provided by Uniface in the USYS library rarely need to be changed, however.  IU specific keyboard translation tables are much more likely to be needed.

 

 

Appropriate Usage

The use of these objects is unavoidable.  The standard Uniface keyboard translation tables should be used as a starting point for IU specific keyboard definitions (so that these can be changed over time and not stepped on when reimporting the standard Uniface translation tables, which can change from release to release).

 

Keyboard translation tables can be used to define special userkeys for power users (who are comfortable with keystrokes), or to remap keystrokes to maximize consistency of the user interface across Uniface and non-Uniface applications.  Indeed, generic functionality like automatic invocation of an issues tracking form can be supported by a userkey map.

 

The temptation to allow application developers to perform keyboard switching by assigning $keyboard in code should be avoided because these assignments are global to the session, and forgetting to initialize the value each time (making the assignment file $keyboard setting useless) or forgetting to reset $keyboard can impact the runtime behavior of other objects.  Keyboard switching is especially awkward in a non-modal environment, where the keyboard might have to be changed every time a user clicks from one form to another.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests a completely meaningful name, but a system prefix probably helps provide meaning.  There should be fairly few such tables.

Global Proc

Global procs are compiled proc modules that run in the context of the current form component.  (Entry procs that follow the main proc module may exist internal to the global proc, but they are not directly referenceable.)

 

 

Library Search Order

When Uniface processes a call statement to a given proc, it first attempts to locate a local entry proc in the current form.  Failing that, Uniface looks for a suitably named global proc in the form library (if specified under component properties).  If necessary, Uniface then looks in the application library (if one is specified in the application startup shell).  Finally, if the global proc has not been found, Uniface looks in SYSTEM_LIBRARY.

 

The library search order applied at runtime is unaffected by $variation settings (in proc code or the assignment file).  Also, a global proc could be called indirectly from a form by another global proc, but the search still looks first in the component library and not in the library of the calling global proc.

 

There is no longer any documented method of explicitly specifying which library to look in when calling a global proc (although this technique was available before version 7).

 

 

Appropriate Usage

Uniface has long supported global procs as a way of centralizing algorithms, but other mechanisms including operations and include procs now make the use of global procs unnecessary.  Moreover, the Uniface implementation of compiled global procs is not consistent with self-contained non-form objects, and for that reason global procs cannot be called in fully encapsulated services and reports.  Nor can non-Uniface components directly access global procs, in contrast with service operations.

 

Nevertheless, global procs retain certain advantages over other centralization techniques.  Unlike include procs, they need not be compiled into each component where referenced and are loaded only once into a Uniface session.  Further, global procs have been acknowledged to be more efficient than service operations at runtime.  The standard Compuware Web App Server class recommends the use of global procs for web applications that must scale to arbitrarily many users.

 

A good technique to centralize behavior that must run in the context of the current component is to put the algorithm in an include proc, but then create a global proc of the same name that does a #include on the include proc.  This way, the efficiencies of global procs are realized for forms, while the include proc itself, which really centralizes the source code, can be referenced in services and reports.  See the Include Proc discussion that appears later in this table.
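
A hedged sketch of this wrapping pattern, with hypothetical names:

            ; include proc XX_CALC_TOTAL -- the centralized source, a complete entry proc
            entry XX_CALC_TOTAL
            params
               numeric p_amount : IN
               numeric p_total  : OUT
            endparams
               p_total = p_amount * 1.07    ; illustrative calculation only
            end

            ; global proc XX_CALC_TOTAL -- its entire source is the single line below,
            ; so forms gain global proc efficiency while the real code stays in the
            ; include proc, which services and reports can #include directly
            #include XX_CALC_TOTAL

A form would then simply call XX_CALC_TOTAL, while a self-contained service or report would #include the same proc and call it as a local entry.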

 

Other than this approach of using global procs to “wrap” include procs, global procs should be avoided for the most part, except possibly in the case where 1) an algorithm must run in the context of the current form and 2) the algorithm is form specific, so that putting the source in an include proc gains no additional benefits.  Even here, however, the use of global procs is not favored because the construct itself might disappear and because a global proc offers no flexibility should the algorithm become useful to non-form components.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests that global procs not be used at all.

 

When using a global proc as a deployment mechanism to wrap an include proc, however, the name of the global proc must match the name of the include proc.  Should global procs be used for other purposes, a system prefix is recommended for all project specific procs.  More general global procs might or might not have a system prefix (the prefix is essential if they share the same library with project specific global procs but unnecessary if segregated into a separate library).

 

A further naming convention that differentiated an algorithm contained in a global proc from a local proc or an include proc would aid readability.

Global Variable

Global variables can be examined and assigned in Uniface proc statements, and have an associated data type and layout.  They are scoped to the local Uniface session (even if that session is a Web App Server session shared by many users), so all components and other objects able to interact with them at runtime share the same set of global variables.

 

 

Library Search Order

The runtime search order for global variables differs somewhat from other library objects -- Uniface looks first in SYSTEM_LIBRARY, then the library named in the application startup shell (if any), then the component library.  Neither the assignment file setting nor the Uniface function $variation affects this.

 

This search order is the inverse of that used for other object types, so that the most generic version of the variable is used, largely to minimize confusion.

 

 

Appropriate Usage

In earlier versions of Uniface, except for the not very readable general variables ($1 - $99), global variables were the only good way to pass values between Uniface objects -- operations, parameter passing, and send/postmessaging did not exist.  Further, since local proc variables did not exist, global variables were often used internally within global procs (at a time when global procs were much more important).  From at least version 7.202 forward, however, little justification remains for using global variables.

 

Without exploring the details, it should be noted that some limited exceptions to the prohibition on global variables exist when parameter passing will not work due to timing issues and/or modality.  These cases are few and subtle so will not be elaborated further.

 

Also, because local proc variables cannot have a display format defined for them, global variables might still serve as useful tools for formatting values and imbedding them in strings.  Component variables can serve the same purpose, but global variables have the advantage where the formatting is useful throughout the forms in a system, and especially in non-component objects like global procs, which have no component variables associated with them.

 

Under no circumstances should global variables be used to store information that requires any kind of persistence.  That is, assigning a global variable with a defined display format to a value and then immediately imbedding the global variable in a string is perhaps acceptable.  Setting a global variable at one point then interrogating the contents later is not viable.
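
For example (the variable name is hypothetical, and its display format would be defined on the global variable itself in the UDE):

            $$GV_RUN_DATE = $date
            message "Report generated on %%$$GV_RUN_DATE%%% by the nightly run."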

 

Global variables cannot be used at all in self-contained services and reports.

 

In brief, global variables should be used only when they provide the only reasonably direct mechanism for some functionality.  It is always possible to avoid global variables, but sometimes the difficulty of doing so in effect rationalizes the use of global variables.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests an optional system name prefix for global variable names.  To aid readability and avoid the possible confusion that might arise should SYSTEM_LIBRARY and a project library have one or more global variables of the same name, the system name should be mandatory.

Glyph

Glyphs are stored images in a DBMS/operating system independent format proprietary to Uniface.  Glyphs can be used as command button icons, panel icons, form backgrounds, etc.

 

 

Library Search Order

Except for the USYS library, which is off limits to developers, Uniface looks for glyphs only in the library specified by $variation (whether set in the assignment file or by assigning the Uniface function).

 

 

Appropriate Usage

Glyphs, which are not platform specific and which allow quick access relative to image files (e.g., bitmap files on disk), are a reasonable way to store some images used in a Uniface application, so long as non-Uniface applications need not access the images and so long as the user need not add or update images.  Quite often a good alternative with comparable performance is storing images in a database (e.g., BLOB images).

 

Glyphs associated with panel buttons offer user hint text when a message of the same name in the same library is defined.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests a meaningful name optionally followed by a suffix that types the glyph according to use (for example, SAVE_C for a glyph used on a command button that saves data).  However, since a glyph may serve in multiple roles, this might have less utility for some developers than others.  On the other hand, a system identifier should be included in the meaningful portion of the name to enhance code readability.

Help Text / Message

Help text and message objects contain text accessed using the Uniface proc function $text, which does not discriminate between the two.  Although help text and messages are maintained separately in the current version of the UDE and were originally intended for different purposes, they work the same way except where noted in the discussion below.  Help text and messages share the same name space, and either can be single or multi-line at this time (historically, only help text could be multi-line in earlier versions of Uniface).

 

 

Library Search Order

Except for the USYS library, which is off limits to developers, Uniface looks for help text and messages only in the library specified by $variation (whether set in the assignment file or by assigning the Uniface function).

 

 

Appropriate Usage

Uniface allows form field labels to contain a literal or a $text reference to a message or help text object in order to extend language support to labels.  There are no good, clean, and efficient alternatives.
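
For instance (the message name is hypothetical), a field label entered in the form painter as $text(XX_CUSTNAME_L) picks up its text, in the current language, from the library named by $variation.  The same message can be retrieved in proc code:

            v_label = $text(XX_CUSTNAME_L)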

 

Messages provide the only way to provide hint text (tool tips) for panel buttons linked to an operation.  (Standard structure editor panel buttons have default hint text hidden in the USYS library but which could be overridden by an appropriate definition in the library named by $variation.)

 

Messages provide the only way to default trigger code.  This topic is covered in its own section above.

 

While help text and messages can centralize text for a variety of other reasons, they offer little advantage over simply storing messages in a table (other than speed, and even that premium does not really matter much except in the appropriate uses just listed).  In fact, storing messages in a table offers far more flexibility, since $text cannot be used in self-contained reports or services.  The centralization of actual dialogs into a file offers far more flexibility in the long run if the messaging mechanism itself is centralized in such a way that all component types and deployment approaches are supported, including web forms.

 

Similarly, to achieve the same kind of functionality and interface found in non-Uniface applications, actual end-user application help more likely belongs in a format that can be interchanged between the different formats recognized by IU relevant native help engines (hence the recent introduction of proc support for native help engines via the help/topic or help/keyword syntax).  And multilingual sensitivity driven by $language is easy to achieve for either file driven messages or help documentation, should it be necessary.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests that message names optionally should be prefixed with a seven character related object name (tying it to the system, application model, entity, and template), followed by a meaningful name, sequence number, and suffix, where the suffix types the message (for example, _L might indicate a field label). 

 

Should the use of messages be limited to providing field labels and tool tips, however (setting aside default trigger code, where Uniface does not leave naming conventions up to the developer anyway), a system code should prefix the name, and the meaningful part of the name should identify a field or panel button.  The use of suffixes becomes marginal when the use of messages is limited in this fashion.  Sequence numbers could be used as tie breakers (necessary since field names of up to 32 characters do not always map to distinct message names, which can have at most 16 characters -- including any prefix, suffix, and sequence number).

 

 

—————————————

See also Default Trigger

Include Proc

Include procs are pieces of Uniface code that are incorporated into a compiled Uniface object in place of #include directives that reference them.  They can be code snippets or complete modules.  Unlike constants, which can be imbedded in the middle of any proc statement (even in the middle of a quoted string), a #include statement cannot lie in the middle of another statement.

 

 

Library Search Order

Uniface substitutes the include proc for the #include compiler directive at compile time.  An include proc may in turn reference other include procs, which are then recursively substituted.  Constants are substituted after include procs are, so include proc code can reference constants freely.

 

When compiling a component and encountering a #include reference that does not explicitly indicate the library name, Uniface looks first for the include proc in the component library (defined in component properties).  If not found there, Uniface then looks in SYSTEM_LIBRARY.

 

Similarly, when compiling a global proc or menu, Uniface first looks for the include proc in the library in which the global proc or menu is defined, then, if not found, looks in SYSTEM_LIBRARY.

 

In searching through the various libraries, note that it does not matter what default library is named in the UDE assignment file using the $variation setting.  And since include procs are substituted in at compile time, the runtime library, the runtime value of $variation, and the application startup shell library do not affect the library search order at all.

 

The library search order is of course overridden when the #include statement explicitly names the library where the include proc is located.
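
For example (library and proc names hypothetical):

            ; searched for in the component library, then SYSTEM_LIBRARY:
            #include XX_STRING_UTILS

            ; an explicit library name overrides the search order:
            #include XXLIB:XX_STRING_UTILS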

 

 

Appropriate Usage

Include procs can be used in a variety of ways.  Although each compiled Uniface object contains the full compiled include proc, the actual source for the proc code remains centralized (which, from the application maintenance perspective, is as important as, or perhaps more important than, runtime inheritance).  We need to consider both where the use of include procs is appropriate and how best to use them.

 

Part One:  Where to use include procs.

 

Compiled components and operations defined on them centralize source and have runtime in addition to compile time inheritance.  Moreover, operations are much more consistent with component based development (CBD), and are available to Uniface and non-Uniface components alike.  So, component operations should be used in favor of include procs whenever possible because they support a much more robust deployment of a particular piece of functionality.

 

Unfortunately, several important types of proc code cannot be implemented or should not be centralized as an operation in a Uniface component, including:

 

-         Proc code that governs some element of form behavior.  Code that deals with form behavior very often needs to use Uniface functions and proc statements that are not allowed in a self-contained service or report (e.g., field_video).

             

            Although such procs could sometimes be centralized in a hidden form operation, this approach lacks robustness, since it would require that all forms ever needing access to a specific and meaningfully named operation be running non-modally (a modal form cannot activate any operation other than the exec operation on another form).  Requiring that all forms be non-modal is not realistic because all forms on the web are modal, any testing that occurs within the UDE is modal (as opposed to using the /tst command line switch), and, most importantly, sometimes particular forms in the user interface ought to be modal to prevent confusion (which not even /tst gets around).  (Note:  this bullet point is not mutually exclusive with the next two bullet points -- read on.)

             

-         Proc code that is useful in a large number of components but must run in the context of the current component.  By its very nature, proc code that fits into this category cannot be centralized in a component operation defined outside the component that uses it.

             

            For example, suppose every single write and delete performed by an application needed to create one or more rows of data in some kind of audit trailing table or tables.  An actual audit trailing service could be created that centralized all updates to that table.  But further suppose that which fields have been changed, and other information local to the current component, significantly affected what exactly was passed to the audit trail service operation or operations -- this code might be non-trivial and more than a few lines.  It cannot be shoved into the service operation, because the service would not have access to needed information about the data in the current component.  Even if all the “hard core” code finds its way into the one or more service operations, the significant amount of code responsible for packaging up the parameters passed to an operation must be centralized using a different strategy (a sketch of this parameter-packaging pattern appears after this list).

             

            Another example is proc code that defines an operation, for instance, a store operation.  To support a given architectural approach, it might be beneficial to have a standard store operation implemented in every single component able to store data.  Clearly, one cannot use a different service to implement an actual operation on the current component.  Include procs offer an implementation strategy for both examples, although not necessarily the best or only one.

             

-         Proc code that is useful in a large number of components but must be compiled in the context of the current component cannot be isolated to an operation in a specific component either.

             

            As an illustration, suppose that a template init operation performs a lookup on initial security information based on the component, the user, and a bind entity.  There may be several entities painted on a given component, and the bind entity must be indicated using a specific notation, which resembles the notation used to express constants in proc code.  The actual entity name is substituted at compile time, just as is the case with a constant.  Any centralization of this segment of the init operation can only pick up the bind entity at compile time, and the bind entity depends on the current component only.

             

            In this case, the algorithm itself could be implemented as an operation in a service (not the current component),  with the bind entity or bind field passed as an explicit operation parameter.  That is, the circumstances in this bullet point can sometimes be avoided (whereas those discussed in the previous bullet point usually cannot).

             

            For another example, suppose a piece of code can exploit precompiler directives to very good result -- what results they yield will be affected by the nature of the current component only at compile time.

             

            As a last example, blockdata can be #included in only the component and actual trigger where used.  Blockdata does not find much use in contemporary Uniface 7 development, however.

             

-         Proc code that does not exhibit good performance when accessed as a service operation.  There is acknowledgement in the Uniface Web App Server training manual that services do not perform as well as entry procs (which may be global or local).  If a particular piece of code had to be implemented more efficiently, a global proc (assuming that no self-contained services or reports needed to access it) or an included entry proc could be used to implement the offending module.  The included entry proc is the more versatile choice.

             

            If a global proc had superior performance but the same module was needed in a service or report, an include proc could be written and included within a global proc of the same name -- thus efficiently making the module available to forms as a global proc -- and the same include proc could be #included in services and reports.  There remains only one source proc, with a trivial global proc wrapper.  This approach promotes centralization and runtime efficiency while allowing all component types to reference a specific routine optimally.

 

The fact that the above types of proc code cannot be centralized well into a service operation immediately leads to the question -- what is a good way to centralize?  Since this entire discussion must somehow tie into include procs (since that is the library object type we’ve been talking about), it should come as no great surprise that include procs can centralize proc code in the above cases where service operations cannot, and only in those cases should they be used.  But are include procs really the best vehicle in all these cases?
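
Returning to the audit trail example in the second bullet above, a hedged sketch (all names hypothetical) of an include proc that runs in the context of the current component, packages up component-local detail, and hands it to a centralized service operation:

            ; include proc XX_AUDIT_WRITE -- #included into each component so that it
            ; can see the component's own data before calling the shared audit service
            entry XX_AUDIT_WRITE
            params
               string p_table  : IN
               string p_status : OUT
            endparams
            variables
               string v_detail
            endvariables
               ; gather component-local information here (changed fields, key values, ...)
               v_detail = ""
               ; pass the packaged detail to the centralized audit trail service
               activate "XX_AUDIT_SVC".write_trail(p_table, v_detail)
               p_status = $status
            end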

 

In the case of form specific procs, global procs could be used (and not simply as a wrapper to the include proc).  There is little advantage to centralizing source proc code in a global proc, other than that it avoids wrapping one object (an include proc) in another (a global proc of the same name that is a single #include statement) in order to gain the efficiency available with global procs without increasing the compiled object size of forms.  Still, should the routine ever require introduction into a service or report, an include proc would pose no problem -- a global proc does.  Indeed, despite their latest reprise in the Web App Server course book, globals are considered by many to be increasingly obsolete, so ultimately centralizing with include procs rather than global procs would seem a good idea.

 

Generally considering all component types, centralization of the actual source might also occur through the application model or a component template, but the source scoping does not lend itself to routines that might be relevant to any number of templates or any number of entities.  Additionally, inheritance losses on the template or model triggers might lead to many exact and/or inadvertently edited copies of a proc module (for example, because the component local proc trigger contains multiple entry procs inherited from the template, where one of the entry procs is modified).  While there are proc modules which should be included in templates because they must be tailored to a given component created from the template, the premise here is that we have some piece of functionality that is referenced in many components and should not require tailoring.  Whether the application developer chooses to reference the include procs, or to perform activities before or after them, is another matter, but the routine itself is assumed to be truly modular.

 

It should also be noted that there is no product supported mechanism to centralize operations (available in more components than those based on a single template), other than by the use of include procs.

 

Part Two:  How to use include procs.

 

Given that an include proc implements some general use piece of functionality, it can be argued that the include proc should not only be a complete proc module, but in fact a callable entry proc.

 

If the include proc were not a complete proc module, it could be #included in the middle of other proc modules.  The include proc would not be able to have its own distinct parameters or even working variables, however, so it would not be able to centralize anything of much complexity.

 

If the include proc were a complete proc module, which might or might not declare working variables, code could not easily be inserted before or after it if it were simply included into a trigger (i.e., if it were not an entry proc).


However, if the include proc were a complete entry proc, it could be defined once in a component but would be callable from anywhere in the component.  Because it would be a black box with defined parameters, developers need not worry much about its implementation internals, and could easily perform actions before and after the call.  An entry proc module can be included into a global proc as well, which essentially ignores the initial entry statement.

 

A problem arises, however.  Include procs might reference other include procs, so it does not seem clear how a developer could simply determine which procs to #include within a component.  Even if the developer rigorously analyzed the code, recursively following calls to other include procs, a change to the include proc that introduced a new reference would invalidate any earlier analysis.  Further, general purpose include procs might find use in application model code (entity or field triggers).  If include statements were made in the application model and multiple entities on a single component inherited a #include statement that pointed to the same proc, the component would not even compile because entry procs in a component must have unique names.

 

Ultimately, the way out of these difficulties is to put the #include statements for all procs that implement general purpose algorithms into another include proc, which functions as an include library.  It would have a set of #include statements for all necessary include procs relevant to a particular type of component (form components able to access general routines via global proc wrappers would not even require this much for those routines).  If component templates are used, an include library can be defined for each component template, centrally maintaining all necessary #include statements so that component developers need not worry about the issue further.  In fact, include libraries defined in this fashion can exploit the recursive nature of Uniface include procs -- they can reference other include libraries.
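
A hedged sketch of such an include library (all names hypothetical); the library itself contains nothing but #include statements, and a component or template needs only a single reference to it:

            ; include proc XX_FORM_INCLIB -- an "include library" of entry procs for forms
            #include XX_AUDIT_WRITE
            #include XX_SECURITY_CHECK
            #include XX_STRING_UTILS

            ; in the component template's local proc trigger, one statement pulls them all in:
            #include XX_FORM_INCLIB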

 

In the case of common operations (an extremely beneficial concept that will not be rationalized here), the concept of an include library is equally utilitarian, though the reasoning has to be adjusted slightly.  For example, an include operation would always have to be a complete proc module, so the arguments to that effect follow even more quickly.  Obviously, it would not be an entry proc but an operation.  A single #include that connects to an include library of operations allows common operations to be centrally maintained on all components based on a certain template, say (the exact granularity would depend on specific implementation strategy choices).  In the case of operations (not #included local entry procs), so long as the #include statement for common operations existed in the component template operations trigger, all forms based on it would receive the operation (even if developers accidentally cut the #include reference).  But as with include procs, an operation should not be entirely #included unless it is completely unavailable for customization -- prototype operations that need tailoring would more likely be coded in the template operations trigger rather than #included, so that they could be modified for a particular component.
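
As a sketch (the operation and include proc names are hypothetical), a common operation held in an include proc and pulled in from the template's operations trigger might look like this:

            ; include proc XX_OPER_STORE -- a complete operation, #included
            ; from the operations trigger of a component template
            operation STORE_DATA
            params
               string p_status : OUT
            endparams
               store
               p_status = $status
            end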

 

The include library in a given template could mix operations and entry procs, or there could be two separate include libraries for a particular component/component template (no matter what, any include library which directly or indirectly includes an operation would have to go in the operations trigger).

 

Blockdata is a rarely used construct, and blockdata worth centralizing is likely rarer still.  Any blockdata worth centralizing merits a service that centralizes it (since it should always be possible to structure the blockdata in such a fashion that it does not depend on the current component context).

 

To Summarize

 

Include procs should be used to centralize common operations and general purpose algorithms not amenable to encapsulation in service operations.  Algorithms should be made accessible as an entry proc, which is made available to a component along with all other relevant include procs via an include proc library or libraries (and the #include for the include library would come from the template on which the component is based).

 

Whether or not to use global wrappers on include entry procs is a judgment call.

 

Other uses of include procs, e.g., partial proc modules, blockdata, containing source for multiple proc modules, etc., are dubious and should be carefully considered on a case by case basis.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests that include procs which contain an entry proc or operation take the name of the module they contain.  A system prefix is considered optional, but the suggestion that the include proc name mirror the contents is sound and would implicitly put a system prefix in the include proc name whenever the contained entry proc or operation name includes a system prefix (i.e., just when it should have one).

 

On the other hand, include libraries reference multiple other include procs and cannot take the name of any of them.  To make proc code more readable, a system prefix is recommended (and is in fact essential if a single library is shared among multiple projects in order to partition the procs by name), followed by a meaningful name that suggests the purpose of this include library and the fact that it is an include library.

 

It would be nice if the naming convention for entry procs and operations (which the include proc name then inherits) could serve to differentiate between operations and entry procs.  However, since an entry proc label or operation name has the same maximum length as an include proc name, adding a suffix or another prefix to the name for that purpose is not workable, especially because entry procs and operations often need all the characters available to them to construct a meaningful name.  Therefore, it is suggested that include libraries relate to operations only or to entry procs only, except possibly the highest level include proc (one that would be #included into a component or template), which should contain only other #include statements.

Menu / Menubar / Pop-up Menus

Menu objects give GUI users a familiar interface to either normal structure editor functions (like Quit) or specific functionality appropriate to an application or a given form.  Further, popup or pulldown options can be disabled or hidden dynamically.

 

 

Library Search Order

Uniface looks for menu objects first in the library specified by $variation (whether set in the assignment file or by assigning the Uniface function).  If not found, Uniface then looks in the component library.  Note that any pulldown menus composing a menubar are searched for individually, first in the library named by $variation and then in the component library.  So, it is possible that the menubar selected at runtime comes from the form library but some of its pulldowns come from the $variation library instead.

 

 

Appropriate Usage

Menus serve to provide a familiar GUI user interface.  They can facilitate full user interaction with applications without requiring that users remember keystrokes, and provide an alternative, low space mechanism to command buttons and panels for activities of any type (although they can be used in combination with buttons and panels).  Whether or not to use menu objects, and in what circumstances, is largely an interface design issue.

 

Menu objects are not currently supported for Web Application Server.

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests that menu object names optionally should be prefixed with a seven character related object name (tying it to the system, application model, entity, and template), followed by a meaningful name.  A menubar would also have an _B suffix.  But because menu objects need not relate to any one template, form, or even application, a system prefix probably is least problematic.

Panel

Panels are application level or form level toolbars.  Each icon on the toolbar can correspond to a Uniface structure editor function or can tie to a form operation (so long as the operation has no arguments).

 

 

Library Search Order

Except for the USYS library, which is off limits to developers, Uniface looks for panels only in the library specified by $variation (whether set in the assignment file or by assigning the Uniface function).

 

 

Appropriate Usage

Panels serve to provide a familiar GUI user interface.  Panels can facilitate full user interaction with applications without requiring that users remember keystrokes, and provide an alternative, low space mechanism to command buttons or pulldown menus for activities of any type (although they can be used in combination with buttons and pulldown menus).  One important thing to keep in mind:  panels and their icons cannot be dynamically changed in the same way that menus and command buttons can (for example, menu options can be dynamically disabled or hidden), restricting their use somewhat.

 

Except for that, whether or not to use panels and in what circumstances is largely an interface design issue.  The latest version of the Uniface GUI Guide does recommend, however, that applications using panels avoid too many very specific panel icons and instead standardize on a limited set of panel buttons that have meaning throughout an application.

 

Panels are not currently supported for Web Application Server (although this might change in the future).

 

 

Naming Considerations

The latest version of Uniface Naming Conventions suggests that panel names optionally should be prefixed with a seven character related object name (tying it to the system, application model, entity, and template), but a panel might or might not relate to a single, particular object.  If the GUI Guide suggestion that panel icons be kept to a recognizable few is followed, then many templates, forms, and even applications could point to the same panel.  A system prefix is recommended in any case.

 

 

 

Using a Single Library

 

It is possible for all developers to share a single library.  Of course, the library does not then assist in partitioning the data, so strict naming conventions must force a prefix to be used in each object name to accomplish that task.  This is a workable approach and certainly the most straightforward (to document and for developers to understand), but it has a few downsides:

 

-       Checking in and out objects by library becomes tantamount to checking in and out all objects of a particular type.  It would remain possible to check in or check out individual objects by name, except in the case of constants (which cannot be version controlled individually either way).  Version control of constants would lose all granularity in the single library approach.

             

-       Performing builds on multiple applications from different development environments would become much more difficult, even if naming conventions were strictly adhered to, because it would not be possible to import or get copies of constants without first blowing away constants from all other areas.  Without developing a tool to assist, the merging becomes a manual task.  Even with a tool (depending on the version control system and how much maintenance IU is willing to spend on a productivity tool to aid this process), the process would almost certainly remain very manual.  Even if the current plan is to have one development environment, this and other considerations reduce flexibility in moving to multiple development environments if and when the need arises.

 

-       A single library scheme is unable to exploit the Uniface library search order to achieve a kind of polymorphism, where a general purpose object is used if and only if a more specific object of the same name has not been defined.  For example, suppose a general purpose include proc centralizes all application independent handling of asynchronous processing (resulting from Windows interrupts that close the application window, postmessaging and sendmessaging, etc.).  Should a given application need to add a layer of handling on top of this for each and every component in the application, it would be possible to create an include proc of the same name that referenced the existing standard include entry proc but added application specific handling on top of that (there is some trickery involved).  Alternatively, an application could replace a general purpose include proc with one more to its liking, allowing a general framework to implement a rich set of functionality appropriate to the majority of conceivable applications without encumbering other possible applications.  This potentially is very useful, hence the relative disadvantage of a single library approach.

 

Should a single library approach be taken, SYSTEM_LIBRARY would seem to be the most natural library to use, since it is on the default search path in many cases.  In all other cases, setting $variation to SYSTEM_LIBRARY would then be sufficient to guarantee that objects are found.  Should a single library approach later give way to a multiple library approach, separation of library objects does not become any more difficult, and is perhaps even easier, because it is likely that general purpose objects would still end up in SYSTEM_LIBRARY.

 

 

Using Individual Team Libraries

 

A more powerful approach that avoids the disadvantages of the single library approach would be to exploit as much as possible the Uniface search path by having all common objects in SYSTEM_LIBRARY, while having a separate library for each team with objects that are project specific.

 

Whether or not teams shared a development environment, each could have a distinct assignment file with $variation naming the team library.  Component objects belonging to the team would have their component library set to the team library as well.  This would allow all compile time objects to be referenced appropriately, whether team specific or not, and could support  polymorphism of centralized routines.

 

Unfortunately, this approach prevents the sharing of menu objects, panels, messages and help text, and glyphs.  (The way around this is to use the “forbidden” USYS library to centralize shared objects here, but this topic will not be explored.)  Still, SYSTEM_LIBRARY could contain “duplicate” templates of these objects.  Truly generic panels and menubars are few, so little re-use potential is lost.  The much more important code centralization with potential for polymorphism is maintained, including calls found in menu triggers that dynamically determine what to disable or hide (an important consideration to the extent that some generic aspects of security might be centralized in include procs).

 

Applications so written would be able to run out of a single startup shell or separate startup shells, just as those developed using a single library scheme.

 

 

Other Library Schemes

 

While variations are possible on the single library scheme and the team library scheme, the library search order used for most library objects has at most two levels, not counting the special Uniface library USYS.

 

That said, it is conceivable that additional libraries could be introduced for objects where the library can be explicitly indicated.  For example, the #include statement allows a library to be named.  Therefore, different types of include procs could be put into different libraries for organizational purposes.

 

Only when clear and specific benefits obtain, however, is a more complicated library structure worth thinking about.  In particular, library conventions that require application developers to perform proc code assignments to $variation in order to address library objects ought to be avoided because these assignments are global to the session.  Forgetting to initialize the value each time when referencing a library object whose search path depends on $variation or forgetting to reset the library value can impact the runtime behavior of other objects.  Good, defensive coding proves elusive.