I’m currently working on porting an app from Xamarin, and the underlying transport mechanism between the app and our DB is a webservice returning results in JSON. I’ve managed to successfully make the Fetch calls to our webservice to retrieve the JSON, but need to consume the images that are retrieved in a base64 format as part of the JSON result.
Is there a mechanism (whether JS or Uno) that I can use for handling base64 images instead of using a URL or file source?
You need to create a custom ImageSource; in this case we get most of it for free by inheriting from TextureImageSource. This should be a good enough starting point for you:
using Uno;
using Uno.Graphics;
using Uno.UX;
using Fuse.Resources;
using Experimental.TextureLoader;

public class Base64ImageSource : TextureImageSource
{
    void SetTexture(texture2D texture)
    {
        Texture = texture;
    }

    string _base64image;

    [UXContent]
    public string Base64
    {
        get { return _base64image; }
        set
        {
            if (_base64image != value && value != null)
            {
                _base64image = value;
                // Strip the data URI prefix before decoding
                var stripped = _base64image.Replace("data:image/png;base64,", "")
                                           .Replace("data:image/jpeg;base64,", "");
                try
                {
                    var data = Uno.Text.Base64.GetBytes(stripped);
                    // Pick the decoder based on the declared MIME type
                    if (_base64image.StartsWith("data:image/png"))
                        TextureLoader.PngByteArrayToTexture2D(new Buffer(data), SetTexture);
                    else if (_base64image.StartsWith("data:image/jpeg"))
                        TextureLoader.JpegByteArrayToTexture2D(new Buffer(data), SetTexture);
                }
                catch (Exception e)
                {
                    debug_log e.Message;
                }
            }
        }
    }
}
After that you can use your new ImageSource like this:
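A minimal UX sketch of such usage follows. The `imageData` observable and its contents are hypothetical; in practice you would bind whatever base64 string your webservice returns:

```xml
<App>
    <JavaScript>
        var Observable = require("FuseJS/Observable");
        // Hypothetical placeholder: replace with the base64 string
        // your webservice actually returns
        var imageData = Observable("data:image/png;base64,....");
        module.exports = { imageData: imageData };
    </JavaScript>
    <Image>
        <Base64ImageSource Base64="{imageData}" />
    </Image>
</App>
```

The Uno class above must be part of your project so the UX compiler can resolve `Base64ImageSource` by name.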
I’m guessing this is because the image comes in at one resolution and is displayed at another. Since the images in Fuse are rendered using a super fast and simple blit shader, I’m guessing the “fuzziness” is the result of hardware bilinear filtering. The UIImage ctor likely does some CPU downscaling with better filtering than the GPU provides, which is why specifying a scaling param in the ctor would make it look better.
The closest thing we have to a solution for this in Fuse currently is our MultiDensityImageSource, though it might not cover your case, where the image resource itself isn't known to the compiler, unless you can serve the images in multiple resolutions from your server (and if you can do that, you might as well serve exactly the resolution you need, which would probably fix the problem on its own). We may add upfront rescaling, like the UIImage ctor seems to do, to the Experimental.TextureLoader class in the future, but as that class is experimental, it's unfortunately not the highest priority right now. We might also detect cases like this, where we know GPU filtering won't look its best, and prescale internally, but that's a bit further out timewise as well.
In the meantime, you can try playing with the different StretchMode values on the Image class and see if any of those look better.
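Switching stretch modes is a one-attribute change on the Image element. A sketch, assuming the Base64ImageSource from earlier in the thread and a bound `imageData` string:

```xml
<!-- Try modes such as Fill, Uniform or UniformToFill
     and compare how each one resamples the texture -->
<Image StretchMode="UniformToFill">
    <Base64ImageSource Base64="{imageData}" />
</Image>
```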
Would this be the best approach to render an image coming from Java native code?
Context: camera preview (using an adaptation of the CameraPanel project to capture the frames without displaying them). Each frame is processed and should then be displayed, so what we need is to render those natively processed frames.
Hi Anders and thanks for the reply.
Yes, we are indeed using CameraPanel, but needed help with the rendering part (the captured frames are processed internally, for AR, before being rendered).
I posted the full question here: