Chapter 5: Custom Code and Integration Adaptation¶
Phase 4 prepared the infrastructure -- database connected, properties migrated, operator deployed and verified. This chapter delivers the procedures to repackage custom Java code into MREF-compatible deployment paths, migrate integration authentication from legacy credentials to MAS-managed secrets, and verify workflow carryover in the pod-based agent model. Three workstreams, each building on Phase 2's inventory templates.
Adaptation Task Sequence¶
Complete these tasks in order. Each references a detailed procedure section below.
- [ ] 1. Repackage ClassLoader record JARs (Section 1.1)
- [ ] 2. Build and deploy Customization Archives (Section 1.2)
- [ ] 3. Handle hybrid JARs requiring both paths (Section 1.3)
- [ ] 4. Migrate API key authentication (Section 2.1)
- [ ] 5. Configure OIDC authentication (Section 2.2)
- [ ] 6. Complete AES keystore post-extraction configuration (Section 2.3) -- requires Phase 4 Section 1.5 complete
- [ ] 7. Update endpoint URLs, certificates, and network policies (Section 2.4)
- [ ] 8. Verify AppPoints licensing for service accounts (Section 2.5)
- [ ] 9. Verify workflow carryover with risk-based sampling (Section 3)
Warning
Step 6 requires the AES keystore extraction from Phase 4 Section 1.5. If you skipped that step, stop here and complete it first -- AES-encrypted integration passwords are unrecoverable without the extracted keystore.
1. Custom Java Repackaging¶
Input
Your Phase 2 custom Java inventory template (Section 3). Each entry is already classified as ClassLoader Record or Customization Archive candidate. Work through them in that order -- ClassLoader records first (no downtime), Customization Archives second (requires pod rebuild).
1.1 ClassLoader Record Upload¶
ClassLoader records store custom workflow task JARs in the TRIRIGA database and deploy them across all pods without a container rebuild. This is the preferred path for any custom Java that implements the CustomTask or CustomBusinessConnectTask interface.
Pre-check
- [ ] Phase 2 inventory template has at least one entry classified as "ClassLoader Record"
- [ ] JAR compiled against `TririgaCustomTask.jar` and/or `TririgaBusinessConnect.jar` (do NOT bundle these IBM JARs in the upload)
- [ ] All classes use approved package prefixes: `com.tririga.ps.*`, `com.tririga.appdev.*`, or `com.tririga.custom.*`
- [ ] Classes implement `com.tririga.pub.workflow.CustomTask` or `com.tririga.pub.workflow.CustomBusinessConnectTask`
Steps:
- Navigate to Tools > System Setup > System > Class Loader Verified1
- Click Add to create a new ClassLoader record
- Enter a unique name -- this name is referenced in workflow task configurations
- Select ClassLoader Type:
- Parent First (default, safest): JVM checks parent classloader before custom classes. Use this unless you have a specific reason not to.
- Parent Last: Custom classes checked first. Use when intentionally overriding platform classes.
- Isolated: Custom classes loaded in isolation. Use when avoiding conflicts with other ClassLoader records.
- In the Resource Files section, click Add to create a new Resource File record
- Upload the compiled `.jar` file
- Save the record -- this commits the JAR as a database BLOB and triggers a revision increment Verified2
Tip
Saving a modified ClassLoader record increments the revision number, triggering automatic reload across all pods without restart. This is hot-deployment -- no downtime, no pod restart. Development Mode file system drops are an anti-pattern in MREF; do not use them. Verified3
Workflow Invocation¶
In the Custom task's Class Name field, use the syntax `<ClassLoaderRecordName>:<fully.qualified.ClassName>` -- for example, `CustomTaskLoader:com.tririga.custom.tasks.DataSyncTask`. The portion before the colon must match the ClassLoader record name exactly (case-sensitive). The portion after is the fully qualified class name within the uploaded JAR. Verified3
Verify
- [ ] Navigate to Admin Console > Class Loader Info -- confirm the ClassLoader appears with the current revision number
- [ ] Trigger a test workflow that invokes the custom task
- [ ] Check the server log (workflow agent pod) for successful class loading -- absence of `ClassNotFoundException` confirms success
Rollback
Delete the ClassLoader record (or revert to previous revision) -- revision decrement triggers reload across all pods.
Gotcha
Classes outside the com.tririga.ps, com.tririga.appdev, or com.tririga.custom namespace prefixes are silently blocked from loading. If your JAR uses a different package root, the class will not load and workflows will fail with ClassNotFoundException at runtime. Repackage the classes under an approved prefix before uploading. Verified3
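Because blocked classes fail only at runtime, it is worth checking package prefixes locally before uploading. The sketch below builds a throwaway demo JAR (a JAR is just a ZIP) with one compliant and one non-compliant class path, then lists any `.class` entry outside the approved namespaces; all file and package names here are illustrative, and Python's stdlib `zipfile` CLI stands in for `jar`/`zip`.

```shell
set -e
# Build a throwaway demo JAR: one class under an approved prefix, one outside it.
mkdir -p demo/com/tririga/custom/tasks demo/org/example
touch demo/com/tririga/custom/tasks/DataSyncTask.class demo/org/example/Helper.class
(cd demo && python3 -m zipfile -c ../demo.jar com org)
# Flag any .class entry outside the approved package prefixes.
# Prints: org/example/Helper.class  (this one must be repackaged before upload)
python3 -m zipfile -l demo.jar | awk '{print $1}' \
  | grep '\.class$' \
  | grep -vE '^com/tririga/(ps|appdev|custom)/' || true
```

Run the same listing against your real JAR; any line of output is a class that will be silently blocked.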
1.2 Customization Archive Build and Deploy¶
Customization Archives are ZIP files that the MREF operator overlays onto the base container image during a rebuild. Use this path for servlet classes, web.xml modifications, 3rd party library dependencies, JSP files, and database configuration scripts -- anything that requires container-level integration beyond the ClassLoader mechanism.
Pre-check
- [ ] Phase 2 inventory template has at least one entry classified as "Customization Archive"
- [ ] All dependencies identified (3rd party JARs must be explicitly included -- no Maven/Gradle resolution at build time) Likely20
- [ ] HTTP/HTTPS/FTP/S3 hosting endpoint available for the ZIP file
Steps:
- Create the ZIP directory structure mirroring `/SMP/maximo/`:

```
customization_archive.zip
applications/
  maximo/
    businessobjects/
      classes/               # Custom business object classes, field validations
    common/
      webclasses/            # Custom servlet classes
    lib/                     # 3rd party dependency JARs
    maximouiweb/
      webmodule/
        webclient/
          components/        # Custom JSP files, UI components
deployment/
  was-liberty-default/
    config-deployment-descriptors/
      maximo-mea/
        meaweb/
          webmodule/
            WEB-INF/
              web.xml        # Servlet mappings, filter config
      maximo-ui/
        meauiwebmodule/
          WEB-INF/
            web.xml          # UI web module modifications
tools/
  maximo/
    en/
      cust/                  # Database scripts (.dbc files)
```
- Place compiled `.class` files, JARs, and configuration files into the appropriate directories
- Include ALL dependency JARs in `applications/maximo/lib/` -- the operator build process does not resolve external dependencies Likely20
- Package the directory structure into a ZIP file
- Host the ZIP on an accessible endpoint (e.g., NGINX pod in the OpenShift cluster, cloud object storage like S3, or an internal HTTP server)
- Configure via one of two methods:
  - MAS Admin UI: Catalog > Manage > Actions > Update Configuration > Customization section -- enter the archive name and URL
  - ManageWorkspace CR YAML: Add `customizationArchiveName` and `customizationArchiveUrl` to the `customizationList` field [Needs Validation -- MREF may use FacilitiesWorkspace CR instead of ManageWorkspace CR]
- Apply changes -- operator reconciliation triggers a new Docker image build
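The assemble-and-package steps can be sketched locally as below. The directory paths follow the structure shown above; the artifact names are placeholders, and Python's stdlib `zipfile` CLI stands in for a `zip` binary.

```shell
set -e
BUILD=archive_build
# Recreate the relevant slices of the /SMP/maximo/ mirror.
mkdir -p "$BUILD/applications/maximo/common/webclasses" \
         "$BUILD/applications/maximo/lib" \
         "$BUILD/deployment/was-liberty-default/config-deployment-descriptors/maximo-mea/meaweb/webmodule/WEB-INF" \
         "$BUILD/tools/maximo/en/cust"
# Placeholder artifacts -- copy your real .class files, dependency JARs, and web.xml here.
touch "$BUILD/applications/maximo/lib/commons-csv-1.10.jar"
touch "$BUILD/deployment/was-liberty-default/config-deployment-descriptors/maximo-mea/meaweb/webmodule/WEB-INF/web.xml"
# Package from inside the build root so entry paths are relative to the archive root.
(cd "$BUILD" && python3 -m zipfile -c ../customization_archive.zip applications deployment tools)
python3 -m zipfile -l customization_archive.zip   # sanity-check the entry paths
```

Listing the archive before hosting it catches the most common packaging mistake: entry paths prefixed with the build directory name, which the operator overlay cannot map onto the image.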
Warning
Applying a Customization Archive triggers a full container image rebuild and pod restart. This causes downtime. Schedule archive deployments during maintenance windows. Verified4
Verify
- [ ] Monitor operator pod logs for successful archive download and image build
- [ ] Confirm all server pods restart with the new image
- [ ] For servlets: test the servlet URL path and verify HTTP 200 response
- [ ] For business object classes: trigger a workflow or form action that invokes the custom class
- [ ] Check pod logs for `ClassNotFoundException` or `NoClassDefFoundError`
Rollback
Remove the archive entry from ManageWorkspace CR (or clear the Customization section in MAS Admin UI). Apply changes to trigger a rebuild without the customization. Pods restart with the base image.
1.3 Hybrid JAR Worked Example¶
Some JARs contain both workflow task classes (ClassLoader path) and servlet/web.xml components (Customization Archive path). Phase 2 flagged these as "Hybrid" in the Risk column. The procedure is to split the JAR into two deployment artifacts.
Scenario: A JAR called `tririga-custom-utils.jar` containing:

| Class | Type | Deployment Path |
|---|---|---|
| `com.tririga.custom.tasks.DataSyncTask` | Implements `CustomTask` -- workflow task | ClassLoader |
| `com.tririga.custom.tasks.ValidationHelper` | Utility used by `DataSyncTask` | ClassLoader |
| `com.tririga.custom.servlets.ReportExportServlet` | HTTP servlet | Customization Archive |
| `com.tririga.custom.servlets.ReportExportFilter` | Servlet filter | Customization Archive |

Dependencies: `commons-csv-1.10.jar`, `json-simple-1.1.1.jar`
Steps:
- Identify the split. Review each class in the JAR. `DataSyncTask` and `ValidationHelper` are workflow task classes invoked by the TRIRIGA workflow engine -- these go through the ClassLoader path. `ReportExportServlet` and `ReportExportFilter` require servlet container integration (web.xml mappings) -- these go through the Customization Archive path.
- Build the ClassLoader JAR. Create `tririga-custom-tasks.jar` containing only the `com.tririga.custom.tasks.*` package. Compile against `TririgaCustomTask.jar`. Do NOT include the servlet classes or 3rd party JARs -- the ClassLoader JAR should be self-contained for workflow invocation.
- Upload the ClassLoader JAR. Follow the Section 1.1 procedure:
  - ClassLoader name: `CustomTaskLoader`
  - Workflow invocation syntax: `CustomTaskLoader:com.tririga.custom.tasks.DataSyncTask`
- Build the Customization Archive. Create the ZIP structure:
```
customization_archive.zip
applications/
  maximo/
    common/
      webclasses/
        com/tririga/custom/servlets/
          ReportExportServlet.class
          ReportExportFilter.class
    lib/
      commons-csv-1.10.jar
      json-simple-1.1.1.jar
deployment/
  was-liberty-default/
    config-deployment-descriptors/
      maximo-mea/
        meaweb/
          webmodule/
            WEB-INF/
              web.xml        # Contains servlet and filter mappings
```
- Deploy the Customization Archive. Follow the Section 1.2 procedure.
- Verify both paths independently.
  - Trigger a test workflow that invokes `DataSyncTask` -- verify it completes via ClassLoader (no pod restart required)
  - Navigate to the servlet URL for `ReportExportServlet` -- verify HTTP 200 response via Customization Archive (after pod restart)
Tip
If the workflow task classes also depend on the 3rd party JARs (e.g., DataSyncTask uses commons-csv), include those JARs in both the ClassLoader JAR upload and the Customization Archive. ClassLoader and Customization Archive class paths are isolated from each other. Likely21
Key Takeaway
ClassLoader records are hot-deployable from the database -- no downtime, no pod restart. Customization Archives require a full container rebuild and cause downtime. Always prefer the ClassLoader path where possible. The hybrid split is the most complex scenario; if your Phase 2 inventory flagged hybrid JARs, budget extra time for splitting and testing both paths independently.
2. Integration Adaptation¶
Input
Your Phase 2 integration inventory template (Section 4). Each integration is already grouped by authentication type (API Key, OIDC, AES Keystore). Work through each group using the corresponding sub-section below.
2.1 API Key Migration (HTTP/SOAP)¶
Pre-check
- [ ] Phase 2 integration inventory has entries with Auth Type = "API Key"
- [ ] MREF deployment is running and accessible via MAS Admin UI
- [ ] External system administrators are available to receive new API keys
Steps:
- Navigate to MAS Admin > Integration > API Keys Verified5
- Click Add to generate a new API key
- Select the integration service account (e.g., `MXINTADM` or the account listed in your Phase 2 template)
- Generate the key and securely store it (password manager, not plaintext)
- Provide the new API key to external system administrators
- External callers update their configuration to pass the API key in the `x-api-key` HTTP header Verified6
- For each Integration Object associated with this service account, execute the ReMap action: open the Integration Object record, select Actions > ReMap. This updates internal ID mappings that may have shifted during OM migration. Verified7
Gotcha
The ReMap action is required after every Integration Object OM migration. Without it, field mappings reference stale internal IDs, causing cryptic "field not found" or null mapping errors despite correct field names. Verified22
Verify
- [ ] Send a test request from the external system using the new API key
- [ ] Confirm HTTP 200 response (not 401 Unauthorized)
- [ ] Check integration processing logs for successful data exchange
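The test request can be sketched with `curl`; the route hostname, resource path, and key value below are placeholders to substitute from your environment.

```shell
# Placeholder route and key -- substitute your MREF route URL and the generated key.
# Expect 200 on success; 401 means the x-api-key header or key value is wrong.
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H "x-api-key: ${MREF_API_KEY}" \
  "https://<workspace_id>-tririga.<mas_domain>/oslc/so/triWorkTask"
```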
Rollback
Revoke the API key in MAS Admin > Integration > API Keys. Revert external system to previous authentication method.
2.2 OIDC Configuration (REST)¶
Pre-check
- [ ] Phase 2 integration inventory has entries with Auth Type = "OIDC"
- [ ] OIDC Provider (e.g., Okta, Azure AD) is accessible
- [ ] OpenShift cluster has `kubectl`/`oc` access for secret creation
Steps:
- Register the application with your OIDC Provider
- Configure sign-in redirect URIs for the MREF endpoint
- Create Kubernetes secret `tas-oidc-secret.yaml` containing:
  - `clientId` -- from OIDC Provider app registration
  - `clientSecret` -- from OIDC Provider app registration
  - Discovery endpoint URL (if provider supports it)
  - If no discovery endpoint, manually declare: `issuerIdentifier`, `tokenEndpointUrl`, `jwkEndpointUrl`, `authorizationEndpointUrl`, `userIdentityToCreateSubject`
- Import the OIDC provider's TLS certificate into the `tas-truststore.yaml` secret (PEM format, ASCII Base64 encoded X.509)
- Apply both secrets to the cluster: `oc apply -f tas-oidc-secret.yaml` and `oc apply -f tas-truststore.yaml`
- Scale the TRIRIGA Controller Manager deployment to 0, then back to 1, to force a pod restart with the new OIDC config
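The scale-bounce in the final step can be scripted as below. The deployment name, namespace, and pod label are assumptions -- confirm yours with `oc get deployments -A | grep -i controller` before running.

```shell
NS=mas-inst1-facilities            # placeholder namespace -- substitute your <instanceId>
DEPLOY=tririga-controller-manager  # placeholder deployment name -- verify in your cluster
oc scale deployment "$DEPLOY" -n "$NS" --replicas=0
# Wait for the old pod to terminate before scaling back up (label is an assumption).
oc wait --for=delete pod -l control-plane=controller-manager -n "$NS" --timeout=120s
oc scale deployment "$DEPLOY" -n "$NS" --replicas=1
```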
Verify
- [ ] Attempt SSO login or REST API call through the OIDC flow
- [ ] Confirm token exchange succeeds (check pod logs for OIDC-related messages)
- [ ] Verify the Integration Object processes requests with OIDC-authenticated identity
Rollback
Delete the OIDC secrets: oc delete secret tas-oidc-secret tas-truststore -n <namespace>. Scale Controller Manager to trigger restart without OIDC config. Revert to pre-migration authentication method.
2.3 AES Keystore Post-Extraction¶
Warning
This section assumes you have completed Phase 4 Section 1.5 (AES Encryption Secret Extraction). If you have not, stop here and complete it first. Without the extracted keystore, AES-encrypted integration passwords are permanently unreadable after MREF activation.
Pre-check
- [ ] Phase 4 Section 1.5 AES extraction is complete
- [ ] `vault_secret` OpenShift Secret exists in the `mas-<instanceId>-facilities` namespace with a `password` key Verified8
- [ ] Phase 2 integration inventory has entries with "AES Encrypted? = Yes"
Steps:
- Verify `vault_secret` is accessible from MREF pods -- the check should return non-empty output confirming the secret exists and is readable
- For each Integration Object with AES-encrypted credentials (from Phase 2 inventory):
  a. Open the Integration Object record in TRIRIGA
  b. Execute Actions > ReMap to update internal ID mappings Verified7
  c. Test the integration connection -- decryption should use the `vault_secret` automatically
- Document verification results per integration in your Phase 2 template (Result column)
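The secret-accessibility check in step 1 can be sketched as below; the secret name comes from the Pre-check, while the namespace placeholder must match your instance.

```shell
NS=mas-inst1-facilities   # placeholder -- substitute your mas-<instanceId>-facilities namespace
# Non-empty output confirms the secret exists and the password key is populated.
oc get secret vault_secret -n "$NS" -o jsonpath='{.data.password}' | head -c 12
```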
Verify
- [ ] Every AES-encrypted integration in Phase 2 inventory has been tested
- [ ] No "decryption failed" or "keystore not found" errors in pod logs
- [ ] Integration data flows end-to-end (not just authentication)
Rollback
AES extraction cannot be undone. If connections fail, verify the vault_secret contents match the original TRIRIGA_AES_SECRET. Re-extract from the TRIRIGA database if necessary using the Phase 4 procedure.
2.4 Beyond Authentication¶
Some integrations require changes beyond authentication. These are the three most common patterns.
Endpoint URL Updates. MREF uses OpenShift route URLs instead of legacy hostnames. Format: https://<workspace_id>-tririga.<mas_domain>/oslc/... Likely9. For each integration, update the endpoint URL in the Integration Object record. External consumers of TRIRIGA APIs must update their stored URLs.
Gotcha
External systems caching OSLC ETags from the old TRIRIGA environment will get HTTP 412 Precondition Failed errors. All OSLC consumer applications must perform fresh GET requests to acquire new ETags after migration. Verified10
Certificate Trust. For outbound HTTPS integrations to partner systems, import external CA certificates:
- Obtain the external system's CA certificate in PEM format
- Import via MAS Admin UI: Suite Administration > Workspace > Actions > Update Configuration > Import Certificates
- OR create/update the `tas-truststore.yaml` Kubernetes secret with the certificate
- This triggers a container image rebuild (same as Customization Archive deployment -- schedule during maintenance) Verified11
Network Policy Adjustments. OpenShift network policies may block integration traffic that flowed freely in the VM environment. Check:
- Egress policies: can MREF pods reach external integration endpoints?
- Ingress policies: can external systems reach MREF routes?
- If blocked, work with the OpenShift administrator to add appropriate NetworkPolicy resources
Tip
Test network connectivity before testing authentication. A blocked network policy produces timeout errors that look like authentication failures.
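Per the Tip, a connectivity probe from inside a server pod separates network-policy blocks from authentication failures. The pod-selection pattern and partner endpoint below are placeholders.

```shell
# A timeout or "connection refused" points at NetworkPolicy/egress;
# a 401/403 means the network path is fine and the problem is authentication.
POD=$(oc get pods -o name | grep -m1 server)   # placeholder selection logic
oc exec "$POD" -- curl -sS -o /dev/null -w '%{http_code}\n' \
  --max-time 10 https://partner.example.com/api/health
```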
2.5 AppPoints Licensing Checkpoint¶
After migrating each service account's authentication, verify its AppPoints consumption. Over-provisioned service accounts waste AppPoints budget.
As a quick reference, the AppPoints tiers (detailed in Phase 2 Section 4):
- Self-Service: 0 AppPoints
- Install: 1 AppPoint
- Limited: 5 AppPoints
- Base: 10 AppPoints
- Premium: 15 AppPoints
Steps:
- For each integration service account in your Phase 2 inventory:
  a. Check which security groups the account belongs to
  b. Identify the highest-tier application accessible via those groups -- that determines the AppPoints tier Verified12
  c. Ask: does this service account actually need access to that application?
- To reduce consumption: create restrictive security groups granting only the BOs and APIs the integration actually uses Verified23
- Update the Phase 2 template with the verified tier for each service account
Tip
AppPoints are consumed concurrently -- while the session is active, not per transaction. A service account that authenticates once and holds a session open consumes AppPoints continuously until the session ends. Design integrations to authenticate, execute, and disconnect. Verified13
Key Takeaway
Integration authentication migration is a prerequisite -- not a post-migration cleanup. AES secrets must exist before activation (hard gate), API keys and OIDC must be configured before external systems can reconnect, and every migrated Integration Object needs a ReMap action to fix stale ID mappings. Test each integration end-to-end, not just authentication.
3. Workflow Verification¶
Input
Your Phase 2 standard artifact inventory (Section 2) filtered to workflows, plus the custom Java inventory for any workflow-triggered custom tasks. You do not need to rebuild or decompose any workflow -- they carry over intact. This section is about verification, not reconstruction.
3.1 Agent Primer -- Pods vs VMs¶
TRIRIGA workflows execute through agents -- background processes that poll the database for pending workflow tasks and execute them. The agent model changes fundamentally in MREF.
Legacy (VMs): Agents registered in AGENT_REGISTRY and AGENT_STARTUP tables, bound to specific server instances. Horizontal scaling meant adding more servers, each running its own set of agents. Agent assignment was manual -- you decided which agents ran on which servers.
MREF (Pods): Agent tables are purged during Phase 4 database prep (Section 1.2). The MREF operator manages agent lifecycle entirely. Agent sizing is controlled via spec.env.size (small/medium/large) in the TRIRIGA CR, which determines CPU and memory allocation per pod. Agent pods scale vertically only -- more CPU/memory per pod, not more pods. Verified14
Dedicated workflow agents (dwfagent): For isolating heavy processing (integration queues, bulk operations), configure dedicated workflow agents via spec.wfagents in the TRIRIGA CR (exact JSON format Needs Validation15). Assign specific users and security groups to dedicated agents via Admin Console > Workflow Agent Info. Verified
Agent timing behavior:
- WF_AGENT_SLEEPTIME: Built-in jitter to prevent database lock-stepping across agent pods. Agents do not wake simultaneously -- they stagger their polling intervals to reduce database contention. Verified16
- AGENT_STALE_TIME_IN_SECONDS: Configurable (default 60 seconds). Stale agents are automatically reclaimed by the operator. Verified24
Warning
Do not attempt horizontal agent scaling (multiple replicas). MREF agent pods are architecturally constrained to single instances. For load distribution, use dedicated workflow agents assigned to specific user groups. Verified17
3.2 Risk-Based Sampling Approach¶
Not every workflow needs exhaustive testing. Categorize your Phase 2 workflow inventory by risk and allocate verification effort accordingly.
| Risk Category | Examples | Verification Depth |
|---|---|---|
| Business-Critical | Lease calculations, space charge-backs, compliance workflows | Full end-to-end test with production-like data |
| High-Volume | Scheduled maintenance WOs, move management, reservation processing | Full test + performance timing comparison |
| Integration-Triggered | Inbound OSLC creates, outbound data sync, notification workflows | Full test with live integration endpoint |
| Standard | Form validations, approval routing, field auto-population | Spot-check sampling (20-30% of workflows in category) |
Start with business-critical workflows. If those pass, the platform is sound. High-volume and integration-triggered workflows test edge cases. Standard workflows are lowest risk -- spot-check to confirm, do not exhaustively test every one.
3.3 Verification Template¶
The example rows below show the expected level of detail for each risk category.
| Workflow | Module | Risk Category | Test Steps | Expected Result | Actual Result | Status | Notes |
|---|---|---|---|---|---|---|---|
| Lease Payment Calculation | Real Estate | Business-Critical | Create test lease, trigger payment calc, verify output | Payment amounts match source system | | | |
| Space Move Request | Workplace Services | High-Volume | Submit 10 concurrent move requests | All 10 processed within 5 minutes | | | |
| OSLC Create Work Task | Facilities | Integration-Triggered | POST to /oslc/so/triWorkTask with API key | Work task created, 201 response | | | |
| Form Field Validation | Custom Module | Standard | Enter invalid data in required fields | Validation message displayed | | | |
Tip
Pre-fill this template from your Phase 2 workflow inventory before starting verification. Prioritize by risk category -- complete all Business-Critical rows before moving to High-Volume.
3.4 Known MREF Workflow Issues¶
These are the most common post-migration workflow issues. All are configuration fixes -- none require workflow redesign.
- WF_INSTANCE_SAVE performance bomb. Default or leftover legacy setting enables full workflow instance recording, causing massive database overhead. Set `WF_INSTANCE_SAVE=ERRORS_ONLY` in production. For bulk data loads, use `DATA_LOAD`, which bypasses instance saving entirely. MREF enforces a 1,000 instance/day hard limit regardless. Verified18
Gotcha
If you notice database CPU spiking and pod memory climbing after migration, check WF_INSTANCE_SAVE first. Full instance recording on a production workload is the single most common MREF performance issue.
- Reserve SMTP agent endpoint. Starting with TAS 11.3, the operator no longer auto-creates the SMTP NodePort service for `reservesmtpagent`. If your environment uses Exchange integration for inbound email workflows, you must manually expose the service using the pod selector `tas.ibm.com/smtp`. Verified19
Gotcha
Inbound email workflows will silently fail -- no error message, just no email processing. If Exchange integration worked pre-migration and stops working post-migration, the missing SMTP NodePort is almost certainly the cause.
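A sketch of the manual NodePort service follows. Only the selector label name `tas.ibm.com/smtp` comes from the text above; the service name, namespace, port, and label value are assumptions -- check your agent pod labels (`oc get pods --show-labels`) before applying.

```shell
oc apply -n mas-inst1-facilities -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: reservesmtpagent-smtp     # assumed service name
spec:
  type: NodePort
  selector:
    tas.ibm.com/smtp: "true"      # label value is an assumption -- verify on the agent pod
  ports:
    - name: smtp
      port: 25
      targetPort: 25
EOF
```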
- EXTERNAL_FRONT_END_SERVER update. Notification email links (workflow-generated emails with URLs back to TRIRIGA) must point to the new MREF route URL, not the legacy hostname. Update this property in the operator configuration. Verified24
- Stale agent timing. Agent sleep intervals may behave differently in pods due to `WF_AGENT_SLEEPTIME` jitter. Workflows that depended on precise timing in the VM environment may fire slightly earlier or later. This is by design -- it prevents database lock-stepping. Verified25
3.5 Scheduled Workflows¶
Scheduled workflows are the most likely to behave differently post-migration because they depend on time-based triggers running within the agent infrastructure.
- The `scheduleragent` runs in MREF's pod topology, sized according to `spec.env.size` Verified14
- Timezone handling: Pod timezone may default to UTC rather than the server timezone from the legacy environment. Check the `TZ` environment variable in the pod spec. If your scheduled workflows depend on local time, configure the timezone explicitly. Needs Validation (no public citation located)
- Cron expression compatibility: TRIRIGA cron expressions carry over, but verify that the "Run Workflows Triggered By Scheduled Events As" field is configured correctly in Admin Console -- this determines which user identity executes scheduled workflows. Verified24
- Timing precision: Scheduled workflows fire within the agent's polling interval, not at the exact cron time. With `WF_AGENT_SLEEPTIME` jitter, expect +/- a few seconds of variance from the VM environment. This is normal.
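The timezone check from the second bullet can be run as below; the pod-matching pattern is a placeholder.

```shell
POD=$(oc get pods -o name | grep -m1 scheduler)   # placeholder selection logic
# An empty TZ plus a UTC timestamp usually means the pod defaults to UTC.
oc exec "$POD" -- sh -c 'echo "TZ=${TZ:-<unset>}"; date'
```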
Gotcha
If scheduled workflows suddenly stop firing after migration, check the scheduleragent pod status first (oc get pods | grep scheduler), then verify the "Run Workflows Triggered By Scheduled Events As" field. A blank field means no user identity is assigned and scheduled workflows will silently not execute.
Key Takeaway
Workflows carry over intact -- you are verifying, not rebuilding. Focus verification effort on business-critical and integration-triggered workflows. The most common post-migration issues are agent timing differences (by design), missing SMTP endpoints (manual step), and WF_INSTANCE_SAVE performance impact (configuration fix). None of these require workflow redesign.
Chapter Summary¶
Phase 5 completes the adaptation layer between your TRIRIGA customizations and the MREF platform. Custom Java code follows two paths: ClassLoader records (hot-deployable, no downtime) and Customization Archives (container rebuild, maintenance window required). Integration authentication migrates to platform-managed secrets -- API keys, OIDC, and AES keystores -- with the critical requirement that AES secrets must exist before MREF activation. Workflows carry over intact but require verification in the new pod-based agent model.
Key Takeaway
The three most common Phase 5 failures are: (1) forgetting the ReMap action on migrated Integration Objects, (2) skipping AES keystore extraction before activation, and (3) not configuring WF_INSTANCE_SAVE before production load. All three are preventable with the sequencing checklist at the top of this chapter. With custom code repackaged and integrations reconnected, proceed to Phase 6: Testing, Cutover, and Quality for comprehensive validation.
Sources¶
- IBM Docs: Adding resource files to class loaders; IBM TRIRIGA Connector User Guide (PDF) ↩
- IBM Docs: Adding resource files to class loaders; IBM TRIRIGA Connector User Guide (PDF); IBM Docs: Custom classes and custom tasks; IBM Docs: Overview of extended functions ↩
- IBM TRIRIGA Connector User Guide (PDF); IBM Docs: Custom classes and custom tasks ↩↩↩
- IBM Docs: Adding customizations; TRM Group: Customization Archive; IBM Docs: Migrating customizations using customization archives ↩
- IBM MREF FAQ; IBM MREF vs TAS architecture comparison (PDF) ↩
- IBM Docs: Adding trusted certificates (Maximo Manage); IBM MAS/TAS Connector ↩
- Maximo Open Forum: AppPoints efficiency (no longer publicly accessible) ↩
- IBM Partner Enablement deck (June 2025, PDF); MAS Moments: App Points in MAS (site unavailable) ↩
- IBM Docs: Basic configuration; IBM MREF vs TAS architecture comparison (PDF) ↩↩
- IBM MREF technical content (PDF) ↩
- IBM MREF vs TAS architecture comparison (PDF) ↩
- IBM TRIRIGA Release Notes for 10.5.1 and 3.5.1; IBM Docs: TRIRIGA tuning ↩
- IBM Docs: Network considerations ↩
- IBM Docs: Adding customizations; IBM Docs: Migrating customizations using customization archives ↩↩
- IBM Docs: Custom classes and custom tasks; IBM Docs: Adding customizations ↩
- IBM TRIRIGA Connector User Guide (PDF); IBM TAP Object Migration User Guide ↩
- Maximo Open Forum: AppPoints efficiency (no longer publicly accessible); MAS Moments: App Points in MAS (site unavailable) ↩
- IBM Docs: TRIRIGAWEB.properties; eCIFM: Early TRIRIGA-to-MREF Lessons ↩↩↩
- IBM Docs: TRIRIGA tuning; IBM Docs: TRIRIGAWEB.properties ↩