Fetch encoding problem

Problem

Hello guys, I have a problem. I’m a bit new to “fetch” in JS, and if anyone could show me what I’m doing wrong, that would be awesome. I’m trying to parse a website which holds the names of all bus stops. The problem seems to be that the site is encoded in “windows-1250”, which makes characters like “ž” show up as “?” in Fuse (but they display correctly in my browser, Chrome). Is there a way to correct this?

fetch('http://www.ap-ljubljana.si/vozni_red2/VR2.php?DATUM=04.03.2016&VSTOP_IME=vir&IZSTOP_IME=Adle').then((response) => {
  response.text().then((data) => {
    debug_log(data);
  });
});

More info:

  1. these characters display correctly when I type them manually
  2. the problem is the same on the device as in the PC preview (and in the console)
  3. by the time response.text() resolves, the data variable already contains “?” instead of the correct characters

My thinking is that maybe the fetch method forcibly decodes the response as UTF-8, which loses the characters in the decoding process?
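For example, in a regular browser console (outside Fuse) the same effect can be reproduced with TextDecoder: the windows-1250 byte for “ž” is not a valid UTF-8 sequence on its own, so a UTF-8 decoder throws it away and substitutes a replacement character (“?” or “�”, depending on the decoder), which is why the original character can’t be recovered afterwards in JS:

// Browser console demo (outside Fuse); 0x9E is "ž" in windows-1250
const bytes = new Uint8Array([0x9E]);
new TextDecoder('utf-8').decode(bytes);         // "\uFFFD" (replacement character), "ž" is gone
new TextDecoder('windows-1250').decode(bytes);  // "ž", decoded with the right charset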

History

I have written this app once before (for my thesis), using the XAMPP stack for the back end: my PHP server handled the scraping of the data and hosted it as JSON for my Android app to use. I had the same problem there, and I fixed it with:

// This is a stripped-down version of my code

// Treat the windows-1250 bytes as Latin-1 and re-encode them to UTF-8,
// then JSON-encode the result (non-ASCII characters end up as \uXXXX escapes)
$result = utf8_encode(curl_exec($ch));
$full = json_encode($result);

// Map the escaped, mis-interpreted code points back to the characters they should be
$find = array("\u009a", "\u00e8", "\u00e6", "\u009e", "\u00f0", "\u008a", "\u00c8", "\u00c6", "\u008e", "\u00d0");
$with = array("š", "č", "ć", "ž", "đ", "Š", "Č", "Ć", "Ž", "Đ");
$final = str_replace($find, $with, $full);

I had to convert the data to UTF-8 to be able to use json_encode on it, and then manually replaced the escaped codes with the correct characters.

I am now trying to get rid of the back end and let the phones scrape the data themselves, and since developing in Android is a pain, I decided to give Fuse a try… and it’s awesome ^^

PS.

I don’t have access to the server, so I cannot change the encoding to UTF-8, and there is no API available, so I’m scraping the data (I left that code out for simplicity and because it has nothing to do with my problem :P)

Answer

Currently we only support UTF-8. I recommend that you set up a server that can proxy the request and preprocess the data the way you want. If you have no other options, you can also decode the data yourself in Uno. Getting text encoding right is a huge task, so we have to set some limitations at this point, even though some encodings are simpler to add than others.
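For anyone going the proxy route, here is a minimal sketch of what such a re-encoding proxy could look like. It is not Fuse-specific and the details are assumptions: it uses Node with the global fetch() and a TextDecoder that supports the 'windows-1250' label (an equivalent PHP script using iconv('windows-1250', 'UTF-8', …) would work just as well). The idea is simply to fetch the page server-side, decode it with the correct charset, and hand the app plain UTF-8:

// proxy.js (hypothetical name): fetch the windows-1250 page, decode it
// server-side and serve it back as UTF-8 so the app only ever sees UTF-8.
// Assumes a Node version with a global fetch() and full ICU support for
// the 'windows-1250' TextDecoder label.
const http = require('http');

const UPSTREAM = 'http://www.ap-ljubljana.si/vozni_red2/VR2.php';

http.createServer(async (req, res) => {
  // Forward the original query string (DATUM, VSTOP_IME, IZSTOP_IME)
  const query = req.url.includes('?') ? req.url.slice(req.url.indexOf('?')) : '';
  const upstream = await fetch(UPSTREAM + query);

  // Get the raw bytes and decode them with the charset the site actually uses
  const bytes = new Uint8Array(await upstream.arrayBuffer());
  const text = new TextDecoder('windows-1250').decode(bytes);

  // Serve the result as genuine UTF-8
  res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
  res.end(text); // Node encodes the string as UTF-8 by default
}).listen(8080);

The app would then point its fetch() at the proxy (e.g. http://your-server:8080/?DATUM=…) instead of the original URL, and response.text() would contain the correct characters, because the payload it receives is already valid UTF-8.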