Hi,

I am currently exporting data to a CSV file to be viewed in Excel.
I need to support exporting data in Japanese, so the CSV needs to
support Unicode. The issue is that Excel seems to have trouble
detecting the encoding of a file that contains Unicode characters.
After searching the internet I have found that you need to put a BOM
(byte order mark) in the first few bytes of the file to tell Excel
what encoding has been used. I have found different byte sequences to
use, but cannot seem to find much information on what they actually
stand for. Here are the ones I have tried.

0xEF 0xBB 0xBF: This maintains the comma separation, but the text is
garbage. I am assuming this is because this BOM declares UTF-8
encoding, while I am writing straight Unicode (UTF-16).
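If it helps to make the mismatch concrete, here is a minimal Python sketch (the filename and sample rows are just placeholders, not my real data): for the 0xEF 0xBB 0xBF signature to make sense, the bytes that follow it have to actually be encoded as UTF-8.

```python
# Minimal sketch (placeholder filename and rows): write a UTF-8 BOM
# followed by genuinely UTF-8 encoded, comma-separated data.
rows = [["名前", "年齢"], ["田中", "30"]]

with open("export_utf8.csv", "wb") as f:
    f.write(b"\xEF\xBB\xBF")  # UTF-8 BOM: declares the file as UTF-8
    for row in rows:
        # Encode each line as UTF-8 to match the BOM; writing UTF-16
        # bytes after this BOM would explain the garbage I am seeing.
        f.write((",".join(row) + "\r\n").encode("utf-8"))
```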

0xFF 0xFE: This makes Excel open the CSV file as Unicode, correctly
displaying the Japanese characters, but Excel then defaults to a
tab-delimited separator instead. For the moment I can simply separate
fields with tabs instead of commas, but I suspect that not all
spreadsheet programs support tab-delimited files?
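For reference, here is roughly how I am producing the variant that does open correctly (a minimal Python sketch; the filename and sample rows are placeholders):

```python
# Minimal sketch (placeholder filename and rows): write a UTF-16LE BOM
# followed by UTF-16LE encoded, tab-separated data -- the combination
# Excel currently opens and displays correctly for me.
rows = [["名前", "年齢"], ["田中", "30"]]

with open("export_utf16.csv", "wb") as f:
    f.write(b"\xFF\xFE")  # UTF-16 little-endian BOM
    for row in rows:
        # Tabs instead of commas, since Excel treats this file as
        # tab-delimited; lines are encoded as UTF-16LE to match the BOM.
        f.write(("\t".join(row) + "\r\n").encode("utf-16-le"))
```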

I guess what I am after is some more information on BOMs and how
Excel interprets them. Does anyone know of a BOM that will tell
Excel that the CSV file is comma separated and that it contains
straight Unicode?

Thanks for taking the time to read this and have a nice day....
Danny Mosquito