ADOBE EXPERIENCE PLATFORM

Unlocking the Power of AEM Extension Manager: A Comprehensive Guide

Md Maroof Khan
Martin Altmann


In the ever-evolving landscape of digital content management, Adobe Experience Manager (AEM) stands out as a robust solution for enterprises seeking to deliver personalized digital experiences. With the introduction of the AEM Extension Manager, Adobe has opened new horizons for developers and content authors alike, enabling them to extend and customize AEM's capabilities without diving deep into complex codebases.

In this comprehensive guide, we'll explore the AEM Extension Manager, delve into its features, and walk through practical examples of creating custom UI extensions, including an integration with OpenAI's DALL·E for image generation directly within AEM.

Table of Contents

  1. What is AEM Extension Manager?
  2. Why Use UI Extensions?
  3. Extension Points in AEM
  4. Creating Your First UI Extension
  5. Deep Dive: Integrating OpenAI's DALL·E 3 with AEM
  6. Testing and Deploying Extensions
  7. Best Practices and Considerations
  8. Conclusion

What is AEM Extension Manager?

The AEM Extension Manager is a powerful feature available in AEM as a Cloud Service that enables developers to create, manage, and deploy UI extensions without altering the core AEM codebase. These extensions are applications built using Adobe's UIX Extensibility SDK and deployed on Adobe App Builder, a serverless platform for building and deploying custom Adobe solutions.

Key Features:

Why Use UI Extensions?

UI Extensions provide a way to enhance the AEM user experience without the overhead of traditional development cycles. Here are some compelling reasons to use them:

Screenshots of demo projects

Extension Points in AEM

Extensions can be integrated at various points within AEM:

These extension points provide the flexibility to enhance different parts of the AEM interface, depending on your organization's needs.

Creating Your First UI Extension

You can find a step-by-step guide here. You can also jump directly to the GitHub project to get your hands on the code.

Prerequisites:

Steps:

1. Set Up Your Development Environment:

npm install -g @adobe/aio-cli

2. Create a New Project:

aio app init my-aem-extension --template https://github.com/adobe/aem-cf-editor-extension-template

3. Understand the Project Structure:
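
The generated project follows the standard App Builder layout. Assuming the CF Editor template above, it looks roughly like this (folder and file names can differ slightly between template versions):

```
my-aem-extension/
├── src/
│   └── aem-cf-editor-1/
│       ├── actions/         # Adobe I/O Runtime (backend) actions
│       ├── web-src/         # React front end: extension registration, modals
│       └── ext.config.yaml  # Extension point configuration
├── app.config.yaml          # App Builder application configuration
└── package.json
```

The front end under web-src hosts the extension registration and any modal routes, while actions contains the serverless code deployed to Adobe I/O Runtime.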

4. Develop the Extension:

Frontend:

Backend:

5. Test Locally:

6. Deploy the Extension:

Deep Dive: Integrating OpenAI's DALL·E 3 with AEM

Let's walk through a practical example of creating an extension that generates images using OpenAI's DALL·E and integrates them into AEM content fragments.

Objective:

  1. Add a button in the Content Fragment Admin Console.
  2. On click, open a modal to input an image description.
  3. Generate an image using DALL·E based on the description.
  4. Upload the generated image to AEM Assets.
  5. Update the selected content fragment to include the new image.

Implementation:

1. Adding the Button to the Admin Console

In extension-registration.js:

import { register } from '@adobe/uix-guest';
import { generatePath } from 'react-router';

const guestConnection = await register({
  extensionPoint: 'ContentFragment.ActionBar',
  id: 'generate-image-button',
  methods: {
    // Configure your Action Bar button here
    actionBar: {
      getButton() {
        return {
          id: 'generate-image',    // Unique ID for the button
          label: 'Generate Image', // Button label
          icon: 'PublishCheck',    // Button icon
        };
      },

      // Click handler for the extension button
      onClick(selections) {
        // Collect the selected content fragment paths
        const selectionIds = selections.map((selection) => selection.id);

        // Create a URL that maps to the modal's React route
        const modalURL = '/index.html#' + generatePath(
          '/content-fragment/:selection/generate-image-modal',
          {
            // Set the :selection route parameter to an encoded, delimited
            // list of paths of the selected content fragments
            selection: encodeURIComponent(selectionIds.join('|')),
          },
        );

        // Open the route in the extension modal using the constructed URL
        guestConnection.host.modal.showUrl({
          title: 'Generate Image',
          url: modalURL,
        });
      },
    },
  },
});

2. Creating the Modal Component

In GenerateImageModal.js:

import React, { useState } from 'react';
import {
  Provider, defaultTheme, Content, Flex, Form, TextField,
  ContextualHelp, Heading, Text, ButtonGroup, Button,
} from '@adobe/react-spectrum';
import allActions from '../config.json';
import actionWebInvoke from '../utils';

// fragmentIds: the selected content fragment paths decoded from the route
// guestConnection: the UIX guest connection established via attach()
export const GenerateImageModal = ({ fragmentIds, guestConnection }) => {
  const [imageDescription, setImageDescription] = useState('');
  const [actionInvokeInProgress, setActionInvokeInProgress] = useState(false);
  const [actionResponse, setActionResponse] = useState(null);

  async function onSubmitHandler() {
    console.log('Started Image Generation orchestration');

    // Mark the extension as invoking the action, so a loading spinner can be displayed
    setActionInvokeInProgress(true);

    // Set the HTTP headers to access the Adobe I/O Runtime action
    const headers = {
      Authorization: `Bearer ${guestConnection.sharedContext.get('auth').imsToken}`,
      'x-gw-ims-org-id': guestConnection.sharedContext.get('auth').imsOrg,
    };

    // Set the parameters to pass to the Adobe I/O Runtime action
    const params = {
      aemHost: `https://${guestConnection.sharedContext.get('aemHost')}`,
      fragmentId: fragmentIds[0],
      imageDescription,
    };

    const generateImageAction = 'generate-image';

    try {
      const generateImageActionResponse = await actionWebInvoke(allActions[generateImageAction], headers, params);

      // Store the response from the Adobe I/O Runtime action
      setActionResponse(generateImageActionResponse);
      console.log(`Response from ${generateImageAction}:`, generateImageActionResponse);
    } catch (e) {
      // Log any errors
      console.error(e);
    }

    // Set the action as no longer being invoked, so the loading spinner is hidden
    setActionInvokeInProgress(false);
  }

  return (
    <Provider theme={defaultTheme} colorScheme="light">
      <Content width="100%">
        <Flex width="100%">
          <Form width="100%">
            <TextField
              label="Image Description"
              description='The image description in natural language, e.g. "Alaskan adventure in wilderness, animals, and flowers".'
              isRequired
              onChange={setImageDescription}
              contextualHelp={(
                <ContextualHelp>
                  <Heading>Need help?</Heading>
                  <Content>
                    <Text>
                      The <strong>description of the image</strong> you are looking for,
                      in natural language, e.g. "Family vacation on the beach with blue
                      ocean, dolphins, boats and a drink".
                    </Text>
                  </Content>
                </ContextualHelp>
              )}
            />
            <ButtonGroup align="end">
              <Button variant="accent" onPress={onSubmitHandler}>Create Image</Button>
              <Button variant="secondary" onPress={() => guestConnection.host.modal.close()}>Close</Button>
            </ButtonGroup>
          </Form>
        </Flex>
      </Content>
    </Provider>
  );
};
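
The modal relies on an actionWebInvoke() helper to call the Adobe I/O Runtime action. The App Builder templates ship their own version; a minimal sketch of what it does (assuming Node 18+/browser global fetch) looks like this:

```javascript
// Minimal sketch of actionWebInvoke(): POST the params as JSON to the
// runtime action's URL, attaching the IMS auth headers, and return the
// action's JSON response. The real template helper adds more options.
async function actionWebInvoke(actionUrl, headers = {}, params = {}) {
  const res = await fetch(actionUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', ...headers },
    body: JSON.stringify(params),
  });
  if (!res.ok) {
    throw new Error(`Action at ${actionUrl} failed with HTTP ${res.status}`);
  }
  return res.json();
}
```

The action URLs themselves come from the generated config.json, keyed by action name, which is why the modal looks up allActions['generate-image'].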

3. Backend Actions

In actions/generateImage/index.js:

const OpenAI = require('openai');
const { DirectBinaryUpload, DirectBinaryUploadOptions } = require('@adobe/aem-upload');
const { Core } = require('@adobe/aio-sdk');
const fetch = require('node-fetch');

async function main(params) {
  const logger = Core.Logger('generate-image', { level: params.LOG_LEVEL || 'info' });
  const { aemHost, fragmentId, imageDescription } = params;

  // The IMS access token forwarded by the extension in the Authorization header
  const accessToken = params.__ow_headers.authorization.substring('Bearer '.length);

  // Generate image using DALL·E 3
  const openai = new OpenAI({
    apiKey: params.OPENAI_API_KEY,
  });
  const response = await openai.images.generate({
    model: 'dall-e-3',
    prompt: imageDescription,
    n: 1,
    size: '1024x1024',
    quality: 'standard',
  });

  const generatedImageURL = response.data[0].url;

  // Download the image into a buffer (downloadImage() is a small helper)
  const imageBuffer = await downloadImage(generatedImageURL);

  // Upload the image to AEM Assets via direct binary upload
  const fileName = `generated-${Date.now()}.png`;
  const targetFolder = '/content/dam/generated-images'; // adjust to your DAM structure
  const options = new DirectBinaryUploadOptions()
    .withUrl(`${aemHost}${targetFolder}`)
    .withUploadFiles([{ fileName, fileSize: imageBuffer.length, blob: imageBuffer }])
    .withHttpOptions({ headers: { Authorization: `Bearer ${accessToken}` } });

  const binaryUpload = new DirectBinaryUpload();
  try {
    const result = await binaryUpload.uploadFiles(options);
    logger.info('got response from image upload');

    // Check each file's upload result for errors (e.g. the asset already exists)
    result.getFileUploadResults().forEach((fileResult) => {
      logger.info(`File upload result ${JSON.stringify(fileResult)}`);
      fileResult.getErrors().forEach((fileErr) => {
        logger.error(`Upload error for ${fileResult.getFileName()}: ${fileErr}`);
      });
    });

    logger.info('Successfully uploaded generated image to AEM');
  } catch (err) {
    logger.error(`Failed to upload generated image to AEM: ${err}`);
    return { statusCode: 500, body: { error: 'Image upload failed' } };
  }

  // Update the content fragment to reference the uploaded image
  const imgPropName = 'image'; // the fragment model's image property
  const relativeImgPath = `${targetFolder}/${fileName}`;
  const body = {
    properties: {
      elements: {
        [imgPropName]: {
          value: relativeImgPath,
        },
      },
    },
  };

  const res = await fetch(`${aemHost}${fragmentId.replace('/content/dam/', '/api/assets/')}.json`, {
    method: 'put',
    body: JSON.stringify(body),
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
  });

  if (res.ok) {
    logger.info(`Successfully updated ${fragmentId}`);
    return { statusCode: 200, body: { fragmentId } };
  }

  return { statusCode: res.status, body: { error: `Failed to update ${fragmentId}` } };
}

exports.main = main;
Testing and Deploying Extensions

Local Testing

Preview in AEM

Add the devMode and localDevUrl parameters to the Experience Cloud URL:

https://experience.adobe.com/?devMode=true&localDevUrl=http://localhost:9080

Deployment

  1. Deploy to staging: aio app deploy

  2. Switch to the production workspace: aio app use -w Production

  3. Deploy to production: aio app deploy

Enabling the Extension in AEM

  1. Navigate to the Extension Manager in AEM (experience.adobe.com > Extension Manager).

  2. Locate your extension under the appropriate environment.

  3. Enable the extension by toggling its status.

Example: Integrating OpenAI's DALL·E for Image Generation

Use Case: Authors want to generate images based on text prompts directly within the Content Fragment Admin Console.

Solution:

  1. Add a Custom Button: Use the Extension Manager to add a "Generate Image" button to the Content Fragment action bar.
  2. Create a Modal Dialog: When the button is clicked, display a modal where the author can enter a text prompt.
  3. Invoke OpenAI's API: Upon submission, the extension calls a serverless action that sends the prompt to OpenAI's DALL·E API.
  4. Upload the Generated Image: The returned image is uploaded to AEM Assets.
  5. Update the Content Fragment: The content fragment is updated to reference the new image.

Technical Implementation:

  1. Front-End: Built with React, using Adobe's React Spectrum components for consistency.
  2. Back-End: Node.js actions running on Adobe I/O Runtime handle API calls and interactions with AEM.

Benefits:

  1. Enhanced Authoring Experience: Authors can generate and use images without leaving AEM.
  2. Seamless Integration: No need to modify the core AEM codebase.
  3. Reusability: The extension can be used across different projects or shared with the community.

Benefits of Using Extensions:

  1. Flexibility: Tailor the AEM UI to specific business requirements.
  2. No Codebase Modification: Keep customizations separate, reducing the risk of impacting core functionality.
  3. Reusability: Share extensions across multiple projects or with the wider Adobe community.
  4. Simplified Management: Enable, disable, or configure extensions without redeploying AEM.
  5. Security: Extensions run within the authenticated context of the user, ensuring secure interactions.

Best Practices and Considerations

Conclusion

The AEM Extension Manager empowers developers to enhance and customize the AEM experience without the constraints of traditional development cycles. By leveraging UI extensions and Adobe's App Builder, you can deliver tailored functionalities that meet specific business needs, improve authoring experiences, and integrate powerful third-party services like OpenAI's DALL·E.

As the digital landscape continues to evolve, tools like the AEM Extension Manager will play a crucial role in enabling organizations to stay agile and innovative. Whether you're a developer looking to streamline your workflow or a content author seeking enhanced functionalities, the Extension Manager offers a path forward in the Adobe Experience Manager ecosystem.

Have questions or need assistance with your AEM projects? Feel free to contact us directly!

