Hello. I have a scenario where an object class (called Entity) has an Image property of type File Object. Uploading an image file and attaching it to this property works fine. I've added a second image property to this Entity class to be used as a preview (smaller version) image in a gallery context. Is it at all possible to auto-create this preview image and attach it to the preview image property using code? Here's what I've got so far:
// convert a Base64 data URL to a JavaScript File object
function dataURLtoFile(dataurl, filename) {
    var arr = dataurl.split(','),
        mime = arr[0].match(/:(.*?);/)[1],
        bstr = atob(arr[1]),
        n = bstr.length,
        u8arr = new Uint8Array(n);
    while (n--) {
        u8arr[n] = bstr.charCodeAt(n);
    }
    return new File([u8arr], filename, { type: mime });
}
// execute both functions to create the preview image
toDataURL(fileContentURL)
    .then(dataUrl => {
        const fileData = dataURLtoFile(dataUrl, "imageName.jpg");
    });
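For completeness, the `toDataURL` helper isn't shown above; a minimal sketch of what it might look like (an assumption on my part, based on it simply fetching the URL and base64-encoding the response body):

```javascript
// Hypothetical sketch of the toDataURL helper used above: fetch the file
// URL and convert the response body to a base64 data URL. fetch() and
// Blob.arrayBuffer() are available in modern browsers (and Node 18+).
async function toDataURL(url) {
    const response = await fetch(url);
    if (!response.ok) {
        throw new Error('Failed to fetch ' + url);
    }
    const blob = await response.blob();
    const bytes = new Uint8Array(await blob.arrayBuffer());
    // btoa() expects a binary string, so build one byte by byte
    let binary = '';
    for (let i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return `data:${blob.type};base64,${btoa(binary)}`;
}
```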
fileContentURL is the File Content URL taken from the Entity object's File Object property in context. I suspect the fileData I create at the end cannot be used directly, but using the dataUrl, can it be auto-stored to a run-time (temp) File Object and thus added to the context object's 'preview image' property?
PS: This inquiry seems related to this: Returning data when executing async functions, though I have not been able to get that solution to work (I'm not sure how to implement it). Help with that would be appreciated, as it will likely resolve my issue.
If the purpose is to show a compressed version of the image as a preview, you can achieve this by using Responsive Image on the Image Component.
This will automatically scale the image based on the client's resolution. I have attached some screenshots from a test, showcasing the difference in resolution and file size.
@ErikAKSkallevold Thank you for the Responsive Image tip. I was aware of this, and perhaps I could use it as a solution in my case, though I'm not sure the code behind the responsive screen-size detection would work here. My project is an image gallery, where I need a smaller version (like a thumbnail) of each image for the gallery page itself, shown before an item is selected and the full-size version is displayed. Although I managed to create a smaller version using the answers here: How do I send a file object from a coded component to the datastore? - #5 by lbj, I'm still having trouble actually displaying this object using its File Content URL. I've verified that the content is truly there by pasting its blob string into a browser, but I still haven't managed to get this preview image to display.
Here's the coded component code that starts when I enable it by clicking an Upload Image icon (setting App Variables.hasClickedUploadImage to true, which is later set back to false in this code):
The uploadImage action creates the PreviewImage file object by using "return window.the_blob;" as the function that provides the image data. This works.
I presume that images stored in Appfarm end up on a CDN service. I'm familiar with services like Cloudinary, where images can be fetched at any size simply by changing URL parameters. I'm also guessing images are cached, based on messages I see in the console log. In other words, if I can specify the maximum size of the images listed in the gallery, they would be both fetched from the CDN at that size and cached, so over time the thumbnails would hardly even require a server fetch (theoretically). Is this what the Responsive Image feature actually does, or is the fetched size controlled by code reading the target screen size? There's a big difference between the two for my particular gallery scenario.
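For reference, this is what URL-parameter resizing looks like with Cloudinary specifically (a hypothetical illustration of that CDN's delivery URLs; whether Appfarm's CDN exposes anything similar is exactly my question):

```javascript
// Cloudinary-style CDN resizing: transformation parameters such as
// "w_<width>,c_scale" inserted after the "/upload/" segment of the
// delivery URL make the CDN return a server-side resized image, so a
// thumbnail request never downloads the full-size file.
function thumbnailUrl(originalUrl, width) {
    return originalUrl.replace('/upload/', `/upload/w_${width},c_scale/`);
}

// e.g. thumbnailUrl('https://res.cloudinary.com/demo/image/upload/sample.jpg', 200)
//   -> 'https://res.cloudinary.com/demo/image/upload/w_200,c_scale/sample.jpg'
```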
The __file property is not available for persisted images, so this might be where the code fails. Instead, we need to fetch the Blob via the __fileContentLink property.
I modified your code a bit, and was able to make it upload compressed images.
const uploadByFile = async (the_image) => {
    try {
        // Fetch the blob from the fileContentLink
        const response = await fetch(the_image.__fileContentLink);
        if (!response.ok) {
            throw new Error('Failed to fetch the image');
        }
        const imageBlob = await response.blob();
        const newImage = await blobToImage(imageBlob);

        // Draw a scaled-down copy onto a canvas, preserving aspect ratio
        const canvas = document.createElement('canvas');
        canvas.width = 400;
        canvas.height = (newImage.naturalHeight / newImage.naturalWidth) * canvas.width;
        const cntxt = canvas.getContext('2d');
        if (!cntxt) {
            throw new Error('Could not get canvas context!');
        }
        cntxt.drawImage(newImage, 0, 0, canvas.width, canvas.height);

        // Export the canvas as a compressed JPEG blob
        const blob = await new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', 0.25));
        window.the_blob = blob;

        // Call the upload action; this depends on your Appfarm setup
        await appfarm.actions.uploadCompressedImage();
        const url = appfarm.data.previewImage.get();
        console.log({ url });
        return {
            success: 1,
            file: { url }
        };
    } catch (error) {
        console.error('Error during upload:', error);
        return { success: 0, error: error.message };
    }
}
const blobToImage = (blob) => {
    return new Promise((resolve, reject) => {
        if (!(blob instanceof Blob)) {
            return reject(new Error('Provided input is not a valid Blob or File'));
        }
        const url = URL.createObjectURL(blob);
        const img = new Image();
        img.onload = () => {
            URL.revokeObjectURL(url);
            resolve(img);
        }
        img.onerror = () => {
            reject(new Error('Image could not be loaded'));
        }
        img.src = url;
    });
}

// Reset hasClickedUploadImage var if needed
// appfarm.actions.unclick();

const btn = appfarm.element.querySelector("button");
const image = appfarm.data.image.get();
console.log({ image });

btn.addEventListener("click", function () {
    uploadByFile(image);
});
Thank you! This fixed it, and the compressed image is now fetched and displayed nicely. I was sure I had tried using the __fileContentLink property without success, which is why I had started using __file instead. I guess the main difference is this code:
// Fetch the blob from the fileContentLink
const response = await fetch(the_image.__fileContentLink);
if (!response.ok) {
    throw new Error('Failed to fetch the image');
}
const imageBlob = await response.blob();
Anyway, the main thing is that it works now. Thank you again!