Archive for the ‘Planet KDE’ Category

GPLv3, LGPLv3, AGPLv3 Discussion

Wednesday, August 24th, 2011

Hi!

Just a short note: I have started a discussion about the exclusion of GPLv3, LGPLv3 and AGPLv3 by the current licensing policy on the kde-licensing mailing list—as promised in my previous blog post discussing some arguments. I will be offline for a few days, so do not wonder if I am not answering.

Regards
Jonathan

Unexpected Bottle Neck: vector<bool>

Tuesday, August 23rd, 2011

Hi folks!

The last paragraph is actually the interesting one for every C++ programmer; the lead-in might not be understandable without knowing the project.

The new lexer component of KDevelop-PG-Qt is not very fast; it can take several minutes with complex grammars, especially when using UTF-8 as the internal encoding for the automata. Of course, I first want to finish the most important features, and for that purpose I am writing an example lexer for PHP to check its capabilities and to spot bugs and missing features. But now I was curious and asked Valgrind/Callgrind for an analysis. I expected a lot of useless copying/assignments, because I have not done any optimisation yet and tried to be “defensive” by copying too much. Indeed: for my first example, about 78% of the runtime was spent on copying. But then I tried a full (as far as it is finished) PHP example with UTF-8, and the result was a real surprise:

KDevelop-PG-Qt: Callgrind results: a lot of comparisons on vector<bool>

KCacheGrind Overview


It spends most of the time comparing instances of std::vector<bool>! For the powerset construction I have to compute successor states for subsets of NFA states, and after that I map those subsets to new unique IDs for the states of the DFA. In both cases I wanted to use a map with vector<bool> (a subset of states) as key_type. I could not decide between map and unordered_map – I used map, but there is also some code making it possible to use QBitArray with unordered_map, just in case. However: I expected the minimisation to be a bottleneck (disclaimer: I stopped Valgrind after some time, maybe it becomes a bottleneck later), but right now the comparisons used in the map implementation are crucial. I was not very economical with those map operations – I could have done the mapping while computing the necessary subsets. It has to be optimised, and I may use unordered_map and unordered_set (hashes) instead of map and set, but that is not the reason why I am writing this blog post. ;)
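The subset-to-ID mapping described above can be sketched roughly like this (names are mine, not KDevelop-PG-Qt's actual code); every lookup in the std::map performs O(log n) full comparisons of bit vectors, which is exactly the operation that dominated the profile:

```cpp
#include <map>
#include <vector>

// A subset of NFA states, represented as a bit vector.
using StateSet = std::vector<bool>;

// Return a unique DFA state ID for the given subset, assigning a fresh
// ID the first time the subset is seen. Each find()/emplace() walks the
// red-black tree and compares whole StateSets along the way.
int stateId(std::map<StateSet, int> &ids, const StateSet &subset)
{
    auto it = ids.find(subset);
    if (it != ids.end())
        return it->second;
    const int id = static_cast<int>(ids.size());
    ids.emplace(subset, id);
    return id;
}
```

Doing the mapping while computing the subsets would avoid many of these lookups in the first place.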

The interesting thing about it is the usage of std::_Bit_reference. std::vector<bool> has a specialised implementation: the booleans get packed, so a full vector only needs size()/8 + const bytes. There is a special hashing implementation directly using the integers that store the bits, but there is only a generic comparison implementation. Thus it iterates over all the bits instead of comparing whole chunks (32 or 64 bits, depending on the CPU). Of course that is much slower; keep in mind that each iteration requires some bit manipulation. Especially in my case of a red-black tree, a vector gets compared to very similar ones next to it in the tree, and in that case the comparison takes especially long. Be aware of vector<bool>: it is often awesome (it saves a lot of memory), but something like this slow comparison may happen, and always keep in mind that the index operator might be slower than for vector<char>. Unfortunately there seems to be no API for accessing the integer chunks. Btw., QBitArray does not support comparison at all. I have filed a bug report with GCC/libstdc++.
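For illustration, here is roughly the difference between the generic per-bit comparison and a hand-rolled word-wise one (this is my own sketch, not libstdc++'s actual code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-bit comparison, as the generic algorithm does it: every access goes
// through a proxy (std::_Bit_reference) and needs shifting and masking.
bool equalPerBit(const std::vector<bool> &a, const std::vector<bool> &b)
{
    if (a.size() != b.size())
        return false;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] != b[i])
            return false;
    return true;
}

// A minimal bit set that exposes its storage: comparing it handles
// 64 bits per word comparison instead of one bit per iteration.
struct BitSet
{
    std::vector<std::uint64_t> words;

    explicit BitSet(std::size_t bits) : words((bits + 63) / 64, 0) {}

    void set(std::size_t i) { words[i / 64] |= std::uint64_t(1) << (i % 64); }

    bool operator==(const BitSet &other) const { return words == other.words; }
};
```

If vector<bool> exposed its underlying words, the library could compare them exactly like the BitSet above.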

The FSF about Free Games

Sunday, July 17th, 2011

I have noticed a new FSF bulletin article via identi.ca: The Free Game Lag by Danny Piccarillo.
The article is about the lack of FLOSS games. It rejects the theory that Free Software would be an unsuitable method for game development: free games will evolve like any other field of software, but currently it is a low priority, because games are not that important. Seriously, the arguments of that article are null and void; it does not take the specific properties of game development into account. I want to explain my thoughts about the issue:

Some time ago I thought it would be impossible for Free Software to conquer game development. A lot of work is involved in developing a big computer game, but there are no people with a specific commercial interest in funding it, so selling licenses seems to be the only possible business model (in comparison, many companies are interested in the development of Linux or Apache). There will not be a RedHat or Google or whoever extensively sponsoring the development of free games, nothing more than some GSoC Wesnoth projects (which is much less than the big game industry invests). What was wrong about my thought? Game development does not necessarily need such a “business model” to be successful. First of all we should notice that there are sophisticated Free Software game engines; XNA or similar proprietary software is not needed. There are e.g. Irrlicht, Ogre3D, Blender and Panda3D; sophisticated graphical effects, physics etc. no longer seem to depend on proprietary software. When looking at the development of major games one may notice that there are seldom epic new concepts; most of the time it is new content, i.e. new graphics/artwork, new story, new characters, new items, new quests etc. It is a lot of work, but it can be done by communities. Gamers have already created huge modifications in their free time. Once free games have reached a certain level, including good graphics etc., there could be entirely new big communities of gamers, and because community and cooperation are integral parts of Free Software, such games would not stop growing. But currently most “serious” gamers only recognise proprietary major games.

But of course those major 3D-shooter/RPG/strategy/…-games are not the only ones; many people are also playing so-called “casual games”, which tend to be very widespread—and proprietary. One may argue that casual gamers do not want to spend time contributing, but I think there is enough hope that they may be interested in it, too. The Gluon Project, which we all know about, seems to have some very nice approaches: it is trying to build such communities for free games, which are currently not present, supported by OpenDesktop.org software (and hardware, but that is not that important). For 2D real-time games it looks very promising. There are also some innovative approaches for turn-based games. E.g. a short time ago I found out about Toss, a free research project combining game creation and general game playing (from an AI point of view). I am sure it would be awesome if communities could be built around such software as well (there is Zillions of Games, but it is proprietary).

When people ask you how gaming as we know it can exist in a free software world, you should open your response with, “It can’t, but it can be better.”

That is definitely right, but there are specific properties which have to be taken into account. There are chances for free games. We should not forsake all hope just because it seems to be impossible with current business models; we should hope that all those businesses (“big game industry”, “app development”, all using DRM etc.) will perish, although currently they seem to be strengthening. Free Software and community can succeed when using entirely new methods.

PS:
What I forgot to write yesterday: crowd funding should be considered, too. Why should ordinary gamers not pay for development instead of licenses if there are good chances that there will be some good results? That is often a good alternative to selling copies; especially unknown artists can benefit. Of course it should not be misused (that is often the case at kickstarter.com: people receive such funding and have no risks, but then they create DRMed products, sell them in the app store etc., which is strictly against the street performer protocol ;)). Btw., OpenDesktop.org supports donations.

Regarding Dynamic Typing

Sunday, July 10th, 2011

Currently there seems to be a big hype about EcmaScript (JavaScript). Google probably wants to enslave the world with Chrome OS, where the system is not much more than an EcmaScript virtual machine provider (maybe it will support native applications through NaCl, like Android does not only allow Java…), Microsoft wants to reimplement their Windows interface, and Qt invented QML, the EcmaScript extension we all know about, providing cool declarative features. Today I want to talk about a fundamental property of EcmaScript and why it just sucks: dynamic typing.

Clarification

First let us clarify the meaning of “dynamic typing”. “Dynamic typing” means that the types of expressions get checked at runtime; expressions which may have any type are possible. It should not be confused with duck typing: many languages using dynamic typing have a lot of built-in functions relying on specific types (“the frog will not get accepted”), e.g. most functions in PHP’s standard library (expecting string, integer, array or whatever). But for example C++ function templates provide duck typing (they will accept anything looking like an iterator), and the signatures in old g++ versions provided duck typing, too. Determining types may happen at compile time (type inference) even with dynamic typing, and of course optimising compilers/interpreters are doing that.
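As a small illustration of the template kind of duck typing (a toy example of mine, not from any library): this function accepts any container-like type with a size() method and an index operator, and the check happens entirely at compile time:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Compile-time duck typing: any type with size() and operator[] returning
// something convertible to int "quacks" correctly. Passing a type without
// them is a compile error, not a runtime error.
template <typename Container>
int sumInts(const Container &c)
{
    int total = 0;
    for (std::size_t i = 0; i < c.size(); ++i)
        total += c[i];
    return total;
}
```

std::vector, std::array and even a hand-written container all work, without any common base class.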

Impact on Development

Let us talk about an argument for dynamic typing: it makes life easier. Actually, that can be right. There are domains where you just do not want to care about such stuff, for example when writing a shell script with few lines of code or when quickly doing some calculations with a computer algebra system. But Typo3, MediaWiki, Windows, Plasma etc. are much more than that. Why do I doubt that dynamic typing makes life easier in those contexts? Because it is error-prone. It is always better when errors get detected at compile time. It is good to fulfill contracts when programming, and they should get verified at compile time, such that mistakes can easily be found and will not annoy the user. One type of contract (not the only one, cf. design by contract) which has been used for a long time is the type system: the programmer assures that a variable has a certain type. What happens in dynamically typed languages? You do not have to state the contract, and the compiler (or a code checker) will usually not be able to check it; it is just in your brain. But of course you will still rely on that contract; the type is something you rely on most of the time when programming: I know that x is an integer when using x for some arithmetic. But you will make mistakes, and you will get buggy software. That is the fundamental disadvantage. Of course I have to compare it to the advantage of dynamic typing: you can write code quickly and efficiently without mentioning the type everywhere. But there is a more proper way to achieve that: type inference. The type of a variable will be determined by the compiler when initialising it, and you will get an error when you try to change the type. That is good because in most cases the type of a variable will not change. And you will get informed about undefined variables (a typo should not cause a runtime error, but in dynamically typed languages it does).
For the case that you need a structure allowing different types at the same position there are algebraic data types. With algebraic data types you can state a contract with only a few tokens (instead of a nested array/dictionary data structure whose layout is just implicitly given by the way it is manipulated, which often happens in dynamically typed languages); for a variable declaration you only need one token, maybe a single character. That minimalistic overhead in code length is definitely worth it once the software has reached a certain complexity. That threshold is probably not very high: annoying mistakes which could have been avoided with static type checking can already occur in small programs just computing some stuff.
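To illustrate both points in C++ (using C++17's std::variant, which did not yet exist when this was written, as a stand-in for a real algebraic data type):

```cpp
#include <string>
#include <variant>

// A sum type: a value is either an int or a string, and nothing else.
// The contract is stated in one line instead of living only in your head.
using Value = std::variant<int, std::string>;

std::string describe(const Value &v)
{
    // Type inference: 'auto' deduces the pointer type here, no annotation
    // needed, yet everything is still checked at compile time.
    if (auto p = std::get_if<int>(&v))
        return "int: " + std::to_string(*p);
    return "string: " + std::get<std::string>(v);
}
```

Forgetting to handle an alternative, or putting a third type into a Value, is a compile error rather than a runtime surprise.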

Performance

Dynamic typing causes big overhead because instructions have to be chosen at runtime based on type information all the time. Of course it is much more complicated to optimise dynamically typed languages: there might be corner cases where the type is not the expected one, and the runtime has to care about them. I often read statements like “the performance-critical parts are implemented natively”, but regarding the number of applications running in such languages (JavaScript, PHP, Ruby, Python, Lua) we have to state: it is performance-critical. PHP is used for more than a preprocessor, QML is used for more than just representing the UI, JavaScript is used for drawing a lot of complex stuff in the browser, Python gets used for scientific computations, and Ruby is establishing new standards regarding overhead (that is not true, Scheme has been that slow before ;), but Ruby allows modifying a lot of stuff at runtime, too). There is reasonable overhead—for abstraction, generalisation, internationalisation etc., but dynamic typing affects nearly every operation when running the program. That is unreasonable, and of course it sums up to significant overhead, although it is simply not needed (and bad for the environment ;)).

Special Issues

Regarding extreme flexibility

First of all: in 95% of applications you do not need it. You do not have to modify types at runtime, add member functions to classes or objects and all that stuff. Sometimes it may be a good way to establish abstraction, but in those cases there are usually alternatives: meta-programming can be done at compile time. When manipulating all the types in Ruby, they usually could have been manipulated at compile time, too, but Ruby does not support sophisticated compile-time meta-programming (ML and Template Haskell do; in C++ and D it is kinda limited). Regarding collection of information, debugging etc. using such features: debugging facilities should not influence performance and cleanness, and I am sure that by involving meta-programming you could implement language features allowing that when debugging without neglecting the type system. And of course a lot of flexibility at runtime can be achieved without allowing any type everywhere: dynamic dispatch (including stuff like inheritance, interfaces, signatures and even multi-dispatch), variant types in a few places (e.g. QVariant, although I think it is used too often; signals can be implemented in a type-safe way, and there are those type-safe plugin factories as an alternative to QtScript and Kross), signals and slots, aspects etc.
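As a sketch of what I mean by type-safe runtime flexibility (again a toy example of mine): dynamic dispatch lets behaviour vary at runtime while every call stays statically checked:

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// Runtime flexibility without dynamic typing: new Formatter implementations
// can be plugged in at runtime, but format() always takes an int and
// returns a std::string; the compiler enforces that everywhere.
struct Formatter
{
    virtual ~Formatter() = default;
    virtual std::string format(int value) const = 0;
};

struct DecimalFormatter : Formatter
{
    std::string format(int value) const override { return std::to_string(value); }
};

struct HexFormatter : Formatter
{
    std::string format(int value) const override
    {
        char buf[16];
        std::snprintf(buf, sizeof buf, "0x%x", value);
        return buf;
    }
};

// Which format() runs is decided at runtime via the vtable; the types never are.
std::string formatAll(const std::vector<std::unique_ptr<Formatter>> &fs, int value)
{
    std::string result;
    for (const auto &f : fs)
        result += f->format(value) + " ";
    return result;
}
```

This is essentially how KDE's plugin factories stay type-safe: the set of implementations is open, the interface is not.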

Regarding EcmaScript

You might say that EcmaScript is becoming fast enough because of good compilers and extensions like type-safe arrays (e.g. containing only floating-point numbers). But EcmaScript will stay EcmaScript; it will keep the downsides of dynamic typing, and those type-safe arrays are an ugly hack to make it feasible for some specific applications. It is simply lacking a proper type system, and it will not get one.

Regarding QML

Using EcmaScript for QtScript was a pragmatic choice, not an awesome innovation: there were many web developers who knew JavaScript. Unfortunately that caused yet another way to integrate scripts, and certainly not the most flexible one (cf. my previous blog post); for some reason they did not want to reuse KDE’s innovation (like QtCreator and KDevelop, but that is really a different topic…). QML is based on EcmaScript because QtScript had been based on it before. Dynamic typing is definitely not an inherent property of such declarative UI; most of it could have looked the same with a native implementation based on C++, and implementations in Ruby or whatever would be easily possible, too. I have to admit that C++ is not perfect: it does not provide sophisticated meta-programming, algebraic types or one-letter type inference (“auto” has four letters ;)). The last one may be a small problem, but overall C++ is simply not simple enough ;); languages like Scala, D and OCaml have certain problems, too. Hence some of the non-declarative code in QML would have been disproportionately complicated compared to the declarative code. The general approach of declarative UI is certainly good, and now we probably have to accept that it has been implemented using EcmaScript. We can accept it, as long as it is still possible to write Plasmoids using C++ or whatever etc.—obviously that is the case. Thus QML is generally a good development in my opinion, although implementing program logic in it is often not a good idea and although dynamic typing leaves a bitter aftertaste.

I hope you have got my points about dynamic typing. Any opinions?

Old Regression by Leonardo da Pisa

Saturday, June 11th, 2011

After reading this blog post I thought a bit about endianness (big-endian is just bad), and while having a shower a theory came to my mind: maybe the Arabs had little-endian integers (meaning least-significant digit first) but wrote (and still do) from right to left (meaning the least-significant digit ends up at the right). And when Leonardo da Pisa (Fibonacci) brought Arabic numerals to Europe, he wrote them in the same style, not flipping the digits, hence establishing big-endian. In fact I could verify that with Wikipedia. But I also noticed that this “bug” had been there before: Indians write from left to right (Wikipedia told me about a coin in Brahmi written from right to left, but that was before there were any numerals), and they have always used big-endian. Thus the Arabs fixed that issue (maybe not knowingly), but stupid Europeans did not get why big-endian is stupid. Furthermore, big-endian numerals look more like those stupid Roman numerals, and our usual way of vocalising them is like in Roman times. And because of Leonardo da Pisa there are those stupid architectures using big-endian representation (fortunately not x86/amd64), causing non-portability, byte-order marks and all that stupid stuff. And left-shifts could actually be left-shifts and right-shifts could be right-shifts.

Short list of arguments for little-endian:

  • The value of a digit d at position i is simply d·b**i (b is the base). That would obviously be the most natural representation if you implemented integers using bit arrays. It does not depend on the length; no look-ahead is required.
  • You can simply add numbers from left to right (no right-alignment needed for summation).
  • For radix sort you can begin from the left.
  • Simple casts between longer and shorter integers without moving any bits.
  • You do not need words like “hundred”, “ten million”, “billiard” etc., because you can interpret a sequence online, without look-ahead.
  • Repeated modulo and integer division by the base give the little-endian representation.
  • The least-significant bits carry more interesting number-theoretical information.

Well, big-endian is more like lexicographic order, although I am not sure whether it is clearly better for natural languages. For division you have to start with the most-significant digit, but—hey—division is obviously not as important as all the other operations where you start with the least-significant digit. Of course sometimes little-endian is not a good notation: for measurements one should use floating-point numbers (in a decimal world called “scientific notation”), and the mantissa should start with the most-significant digit, after the exponent, to avoid look-ahead (unlike the scientific notation).
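The d·b**i interpretation and the modulo/division point from the list above fit into a few lines (a toy sketch of mine):

```cpp
#include <vector>

// Repeated modulo and integer division emit the digits least-significant
// first, i.e. directly in little-endian order; no reversal required.
std::vector<unsigned> toLittleEndianDigits(unsigned n, unsigned base)
{
    std::vector<unsigned> digits;
    do {
        digits.push_back(n % base);
        n /= base;
    } while (n > 0);
    return digits;
}

// The value of digit d at position i is simply d * base^i, so the number
// can be rebuilt in a single left-to-right pass without look-ahead.
unsigned fromLittleEndianDigits(const std::vector<unsigned> &digits, unsigned base)
{
    unsigned value = 0, power = 1;
    for (unsigned d : digits) {
        value += d * power;
        power *= base;
    }
    return value;
}
```

With big-endian digits the reconstruction pass would need to know the length in advance, which is exactly the look-ahead complained about above.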

If Leonardo da Pisa had thought a bit about what he was doing, there would not be all those drawbacks! Just my thoughts about that regression. ;)

Skype Reverse Engineered

Thursday, June 2nd, 2011

Hi!

Good news for Free Software and open protocols: there has been a successful attempt to reverse engineer Skype (Magnet URI). Nice timing: shortly after Microsoft’s acquisition, Skype could finally be broken. :) He is also including modified Skype executables allowing debugging etc., which is usually prevented by really elaborate anti-features (encryption, it kills itself if there is a debugger etc.). The sample code is able to send a message via Skype, awesome!

Now there will hopefully soon be implementations for Telepathy or libpurple (used by Pidgin and Telepathy Haze). One may say we should not promote such proprietary protocols, but: for many people Skype is important (do not say you do not know anybody using it; it varies, in some groups everybody is using it, somewhere else it is different, I am not using it). The chances that they will switch to Free Software (KDE/GNU/Linux) are much higher if there is support for Skype without proprietary software. And once they start using Pidgin or Telepathy, it is no problem for them to use open protocols, too: you can simply tell them which buttons to click, and then they can communicate with you using an open protocol like Jingle or SIP (or XMPP for text messages). Thus it does not only help to spread Free Software, but also to spread open protocols. And all future attempts to commercialise the private data created by Skype can be easily prevented. Even Windows users may start using Pidgin or something like that despite the advertisement. Regarding “it is just a hack”: they cannot simply change their protocol, because there is Skype hardware which cannot be updated and would not work any longer (that would make all non-idealist customers unhappy, too). And for WLM/.NET Messenger/MSN and Oscar/ICQ it has been working well for a long time (even with Kopete ;)). Really, really great news!

Unification and Orthogonalisation: Part 3, HTML-Engines and Web Browsing

Thursday, June 2nd, 2011

Hi!

In this blog post I want to share some thoughts about HTML rendering engines and web browsers in KDE. There seem to be some problems; some may be easier to solve with KDE5 (breaking some API may help).

Does there have to be a KDE browsing experience?

When looking at the browser statistics for my blog, most people seem to use Firefox or Chrome/Chromium; using Konqueror or Rekonq is kinda exotic. Browser statistics may be wrong (I am sometimes using a “faked” user agent string if websites do not like Konqueror), and I even know many KDE users personally who use Konqueror or Chromium. But I think KDE browsers actually have potential. Everybody is annoyed by Gtk+ file selection dialogues, but there are also real features provided by KDE: in some areas Rekonq provides a unique interface, and Konqueror can satisfy many wishes for flexibility. I am not that happy with Firefox and Chromium (quoting Martin Gräßlin: “it is broken” (client-side decoration ;))). KDE web browsing should not be dropped, and certainly nobody is planning to do that.

Rendering engines

We should be honest about KHTML: we have to admit that the JavaScript support is poor and technically not up to date. Many websites are not working, even KDE-related ones and popular sites like Blogspot. KHTML is certainly not yet dead, because kdewebkit is not yet a working alternative: it is lagging behind regarding KDE integration (KDE-style widgets are a must-have), and it cannot be considered stable yet, either. But there is one thing which would definitely be nice: a common interface. KDE is able to attract plugin developers (see Plasma), and it has great technical capabilities (KPluginFactory etc.), but they would have to be used. If there were a unified way to access both KHTML KParts and WebKit KParts, it would be more reliable for people wanting to write a plugin, and of course it would be better for every user if KHTML plugins could be used with WebKit in Konqueror. The API could be unified with KDE 5, but there is still a lot of time left until then; maybe an alternative should be found.

Konqueror and Rekonq

That has to be said: Rekonq started as a “hey, I will try to build a web browser” project; today it is the default web browser in Kubuntu and provides e.g. the awesome address bar. It is now a more serious project, so why is it not integrated into KDE technology? Konqueror makes extensive use of KParts and KXmlGui; I think it would have been no problem to implement Rekonq’s user interface elements as plugins using Konqueror’s infrastructure. Not only would Konqueror benefit, but Rekonq would get a built-in plugin system, PDF viewing, split view, synchronously displaying HTML and modifying via FTP etc. I am sure some performance optimisations could be done for KParts/Konqueror, too, without breaking everything. Now we have got two independent web browsers. Even with Dolphin it is a problem: the promised integration did not come true, and the useful dockers (e.g. for Nepomuk), the address bar etc. are not available in Konqueror, which is using its own sidebar unaffected by Dolphin. But for Rekonq it is even worse, because Rekonq does not even try to integrate with KParts.

It would be nice if there were a single, unified web browser with a reliable rendering/JavaScript engine using all the great technology KDE supplies – especially all the flexibility provided by kdeui.

Regards
A loyal Konqueror user

There is a Substantial Antagonism between Free Software and Capitalism

Sunday, May 29th, 2011

It is not uncommon that people want to tell me that Free Software and Free Knowledge fit nicely into the concepts of capitalism. They are right that they can do a good job for humanity by supporting Free Software or Free Knowledge while accepting capitalistic circumstances and feeling comfortable within them. But it is not true that FLOSS and Free Knowledge are about free markets and capitalism. Freedom is not about markets at all. Markets depend on the concept of scarcity. When supporting Free Software, Free Knowledge, Free Research etc. you are working against the concept of scarcity; they remove the scarcity where it is definitely not necessary. Anybody can benefit from software and knowledge, anybody can make it better, everybody is allowed to copy it. Those are the fundamental concepts of Free Software, and they are fundamentally antagonistic to scarcity. The reason that it is working within capitalism is not a common ideology. The reason is: capitalism is not totalitarian. Most governments do not want it to affect every aspect of life; there remains the freedom to act outside of it, to support other people, to have a family, to love somebody without revenue – and thus you are even allowed to fight against scarcity, you can create Free Software.
But laws may change, and there are powerful parties opposing the ideas of cooperation because they benefit from scarcity. Thus people invented “intellectual property” and told us it would be a worthy ideal. That Free Software is using copyright for its copyleft is just a pragmatic approach necessary to achieve something in the current world. If there were no scarcity for software, i.e. all software were free, there would be no necessity for either copyright or copyleft.
Let us translate the ideals behind Free Software to other parts of the economy: that would mean stopping scarcity and allowing real freedom. E.g. it would mean making food and medicine freely available for everybody. With increasing automation there will be even less necessity for scarcity, but unfortunately some people benefit from it. Automation gives us the opportunity to stop scarcity and alienation in a smooth process without ruining the economy. But it has to be used wisely – that means Free Knowledge and Free Software to prevent technocracy. It could finally remove the necessity of property. It could result in real, human freedom, in equal opportunities without alienation or structurally caused existential dependence. Then capitalism would be over. But maybe there is another alternative more likely to happen: we are heading into technocracy, and all formal democracy will become worthless. Mighty persons, or maybe their invisible (and evil!) hand, will control intellectual property even more, will control all the people, will keep markets, scarcity, capitalism and poverty everywhere. Those who control the knowledge and information can control everything human beings are able to control.

PS:
This article might be interesting to read for those who understand German.

Are you using Wolfram Alpha?

Tuesday, May 17th, 2011

Hi!

This blog post will not be very long, because I want to sleep soon and will do further investigations later.

Many of you will know about Wolfram Alpha. When I first saw it, I thought: hey, that is really cool. Many people are using it. But it has some shortcomings:

  • They want you to buy Mathematica
  • It is proprietary
  • You cannot create arbitrarily complicated queries
  • You cannot save temporary results
  • Sometimes English grammar is too ambiguous
  • No localization

Well, that service should not be used. It is actually not that great, and like Google Mail, Chrome OS, iOS or WP7 it is becoming really popular and is a threat to Free Software in my opinion; especially highly qualified people are using it and stop caring about Free Software (“just a web app”). But what are the alternatives?

  • There is Sage – an awesome computer algebra system combining a lot of free software, using Python instead of ugly, specialised languages like those in Mathematica, Maple and Maxima
  • Semantic databases in RDF format, e.g. DBPedia crawling Wikipedia, or governmental websites like data.gov or data.gov.uk

For people able to use SPARQL the mentioned RDF databases are very powerful tools – no limitations on queries or anything like that. So what is missing? A nice interface anybody can use. It would be really nice to have a free tool parsing natural-language queries (e.g. using Earley parsers and some probabilistic methods; it would not really have to understand real sentences, just some fixed structures would be enough, like “where”, “by”, “all” and fixed sets of attributes). Those queries could be transformed into e.g. DBPedia SPARQL queries, and the RDF results could be transformed into some nice tables and maybe graphs. A free implementation would be a) awesome and b) much more seriously usable than Wolfram Alpha. Do you know about any project trying to do something like that? Any comments? What is your opinion about such software?

Regards

PS:
A few random thoughts about it:

  • Of course natural language stuff would be nice for math, too.
  • With SPARQL access from Sage/Python, data querying and calculations would be combinable as well. A piece of software could generate Python code containing SPARQL.
  • I do not know if there are practically usable distributed RDF databases, but such software could make it possible to distribute query evaluation to the peers; Free Software projects probably cannot afford Wolfram’s computing capacity.
  • Combining different RDF databases may be a problem.

Mango, Papaya, Pomegranate

Tuesday, May 3rd, 2011

Hey guys, I found that in a German supermarket!

Mango, Papaya, Pomegranate in a German Rewe

Mango, Papaya, Pomegranate


Advertisement for fruits at Planet KDE, awesome, huh? Though mangos for free would be much more awesome. That is not a satisfying reason for putting this on Planet KDE? Okay…
2009@copywrite by HR SCHUMACHER 8002

Small clipping


Hey, there is something funny in the picture! He cannot even spell “copyright”, and he took “@” for “Ⓒ”. That should be relevant? Well, I want to tell you what I think about the role of “copywrite” in our society; it came to my mind when seeing this mistake. For most people copyright is really irrelevant: they do not benefit from it, they just want to get paid for their work. In those branches where people sell something the client could copy many times without telling the author, they sometimes utilise copyright because they do not want to be exploited – not the reuse is the bad thing for them, but the missing honesty: the client says it would be only for a small project and then uses it unfairly. Today’s business models in some branches (the music and film industry, not independent artists) strongly depend on copyright, but for most people it is just not that important – yet they are not aware of that, and thus they may not even be aware of how to spell it.