Archive for the ‘Computing’ Category

The FSF about Free Games

Sunday, July 17th, 2011

I have noticed a new FSF-bulletin-article via identi.ca: The Free Game Lag by Danny Piccarillo.
The article is about the lack of FLOSS games. It dismisses the theory that Free Software is an unsuitable method for game development: free games will evolve like any other field of software, they are just a low priority at the moment because games are not that important. Seriously, the arguments of that article are null and void; it does not take the specific properties of game development into account. I want to explain my thoughts about the issue:

Some time ago I thought it would be impossible for Free Software to conquer game development. A lot of work is involved in developing a big computer game, but nobody has a specific commercial interest in the development of such games, thus selling licenses seems to be the only possible business model (in comparison, many companies are interested in the development of Linux or Apache). There will not be any RedHat or Google or whatever extensively sponsoring the development of free games, nothing more than some GSoC Wesnoth projects (which is much less than the big game industry spends). What was wrong about my thought? Game development does not necessarily need such a “business model” to be successful. First of all, we should notice that there are sophisticated Free Software game engines; XNA or similar proprietary software is not needed. There are e.g. Irrlicht, Ogre3D, Blender or Panda3D, so sophisticated graphical effects, physics etc. no longer seem to depend on proprietary software. When looking at the development of major games one may notice that there are seldom epic new concepts; most of the time it is new content, i.e. new graphics/artwork, new story, new characters, new items, new quests etc. That is a lot of work, but it can be done by communities. Gamers have already created huge modifications in their free time. Once free games have reached a certain level, including good graphics etc., there could be entirely new, big communities of gamers, and because community and cooperation are integral parts of Free Software, such games would not stop growing. But currently most “serious” gamers only take notice of proprietary major games.

But of course those major 3D-shooter/RPG/strategy/…-games are not the only ones; many people are also playing so-called “casual games”, which tend to be very widespread—and proprietary. One may argue that casual gamers do not want to spend time contributing, but I think there is enough hope that they may be interested in it, too. The Gluon Project, which we all know about, seems to have some very nice approaches: it is trying to build such communities for free games, which are currently not present, supported by OpenDesktop.org software (and hardware, but that is not that important). For 2D realtime games it looks very promising. There are also some innovative approaches for turn-based games, e.g. a short time ago I found out about Toss, a free research project combining game creation and general game playing (from an AI point of view). I am sure it would be awesome if communities could be built around such software as well (there is Zillions of Games, but it is proprietary).

When people ask you how gaming as we know it can exist in a free software world, you should open your response with, “It can’t, but it can be better.”

That is definitely right, but there are specific properties which have to be taken into account. There are chances for free games; we should not forsake all hope just because it seems impossible with current business models. We should hope that all those businesses (“big game industry”, “app development”, all using DRM etc.) will perish, although currently they seem to be getting stronger. Free Software and community can succeed by using entirely new methods.

PS:
What I forgot to write yesterday: crowd funding should be considered, too. Why should ordinary gamers not pay for development instead of licenses if there are good chances that there will be some good results? That is often a good alternative to selling copies; especially unknown artists can benefit. Of course it should not be misused (that often happens on kickstarter.com: people receive such funding without any risk, but then they create DRMed products, sell them in the app store etc., which is strictly against the street performer protocol ;)). Btw. OpenDesktop.org supports donations.

Regarding Dynamic Typing

Sunday, July 10th, 2011

Currently there seems to be a big hype about EcmaScript (JavaScript). Google probably wants to enslave the world with Chrome OS, where the system is not much more than an EcmaScript virtual machine provider (maybe it will support native applications through NaCl, just like Android does not only allow Java…), Microsoft wants to reimplement their Windows interface, and Qt invented QML, the EcmaScript extension we all know about, providing cool declarative features. Today I want to talk about a fundamental property of EcmaScript and why it just sucks: dynamic typing.

Clarification

First let us clarify the meaning of “dynamic typing”. “Dynamic typing” means that the types of expressions get checked at runtime; expressions which may have any type are possible. It should not be confused with duck typing: many languages using dynamic typing have a lot of built-in functions relying on specific types (“the frog will not get accepted”), e.g. most functions in PHP’s standard library (expecting string, integer, array or whatever). But C++ function templates (e.g. in the STL algorithms) do provide duck typing (they will accept anything looking like an iterator), and the signatures in old g++ versions provided duck typing as well. Determining types may happen at compile time (type inference) even with dynamic typing, and of course optimising compilers/interpreters are doing that.
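A minimal sketch of what I mean by duck typing via C++ templates (the names are made up for illustration): the template accepts any type that provides the operations it actually uses, but the check still happens at compile time.

#include <iostream>
#include <list>
#include <vector>

// Accepts anything that can be iterated with a range-based for loop and whose
// elements can be streamed to std::cout -- duck typing, checked at compile time.
template <typename Container>
void printAll(const Container &container)
{
    for (const auto &element : container)
        std::cout << element << ' ';
    std::cout << '\n';
}

int main()
{
    std::vector<int> numbers{1, 2, 3};
    std::list<const char *> words{"duck", "typing"};
    printAll(numbers);   // fine
    printAll(words);     // fine, a completely unrelated type
    // printAll(42);     // rejected at compile time: int is not iterable
}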

Impact on Development

Let us talk about an argument for dynamic typing: it makes life easier. Actually, that can be right; there are domains where you just do not want to care about such stuff, for example when writing a shell script with a few lines of code or when quickly doing some calculations with a computer algebra system. But Typo3, MediaWiki, Windows, Plasma etc. are much more than that. Why do I doubt that dynamic typing makes life easier in those contexts? Because it is error-prone. It is always better when errors get detected at compile time. It is good to fulfill contracts when programming, and they should be verified at compile time, so that violations can be found easily and will not annoy the user. One type of contract (not the only one, cf. design by contract) which has been used for a long time is the type system: the programmer assures that a variable has a certain type. What happens in dynamically typed languages? You do not have to state the contract, and the compiler (or code checker) will usually not be able to check it; it is just in your brain. But of course you will still rely on that contract—the type is something you rely on most of the time when programming; I know that x is an integer when using x for some arithmetic. You will make mistakes, and you will get buggy software.

That is the fundamental disadvantage, but of course I have to compare it to the advantage of dynamic typing: you can write code quickly and efficiently without mentioning the type everywhere. But there are more proper ways to achieve that: use type inference. The type of a variable will be determined by the compiler when initialising it, and you will get an error when you are trying to change the type. That is good because in most cases the type of a variable will not change. And you will get informed about undefined variables (a typo should not cause a runtime error, but in dynamically typed languages it does). For the case that you need a structure allowing different types at the same position, there are algebraic data types. With algebraic data types you can state such a contract with only a few tokens (instead of a nested array/dictionary data structure whose layout is just implicitly given by the way it is manipulated, which often happens in dynamically typed languages), and for a variable declaration you only need one token, maybe a single character. That minimalistic overhead in code length is definitely worth it once the software has reached a certain complexity. That threshold is probably not very high; annoying mistakes which could have been avoided with static type checking can already occur in small programs just computing some stuff.
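A small sketch of those two points, using std::variant from newer C++ as a stand-in for an algebraic data type (the names are invented for illustration, this is not code from any of the mentioned projects):

#include <iostream>
#include <string>
#include <variant>

int main()
{
    auto x = 42;      // type inference: x is an int, once and for all
    x = x * 2;        // fine
    // x = "oops";    // compile error -- the contract "x is an int" is checked

    // A tiny algebraic data type: the contract "either a number or a word"
    // is stated in a single line and checked by the compiler.
    using Token = std::variant<int, std::string>;
    Token t = std::string("width");
    t = 800;                               // allowed, int is one of the alternatives

    std::visit([](const auto &value) {     // every alternative must be handled
        std::cout << value << '\n';
    }, t);
}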

Performance

Dynamic typing causes big overhead because instructions have to be chosen at runtime, based on type information, all the time. Of course it is also much more complicated to optimise dynamically typed languages: there might be corner cases where the type is not the expected one, and the runtime has to care about them. I often read statements like “the performance critical parts are implemented natively” etc., but regarding the number of applications running in such languages (JavaScript, PHP, Ruby, Python, Lua) we have to state: it is performance critical. PHP is used for more than a preprocessor, QML is used for more than just representing the UI, JavaScript is used for drawing a lot of complex stuff in the browser, Python gets used for scientific computations, and Ruby is establishing new standards regarding overhead (that is not true, Scheme has been that slow before ;), but Ruby allows modifying a lot of stuff at runtime, too). There is reasonable overhead—for abstraction, generalisation, internationalisation etc.—but dynamic typing affects nearly every operation when running the program. That is unreasonable, and of course it sums up to significant overhead, although it is simply not needed (and bad for the environment ;)).

Special Issues

Regarding extreme flexibility

First of all: in 95% of applications you do not need it; you do not have to modify types at runtime, add member functions to classes or objects and all that stuff. Sometimes it may be a good way to establish abstraction etc., but in those cases there are usually alternatives: meta-programming can be done at compile time. When all those types get manipulated in Ruby, they usually could have been manipulated at compile time, too, but Ruby does not support sophisticated compile-time meta-programming (ML and Template Haskell do, in C++ and D it is kinda limited). Regarding collecting information, debugging etc. using such features: debugging facilities should not influence performance and cleanness, and I am sure that with some meta-programming you could implement language features allowing that while debugging, without neglecting the type system. And of course a lot of flexibility at runtime can be achieved without allowing any type everywhere: dynamic dispatch (including stuff like inheritance, interfaces, signatures and even multi-dispatch), variant types in a few places (e.g. QVariant, although I think it is used too often; signals can be implemented in a type safe way, and there are those type safe plugin factories as an alternative to QtScript and Kross), signals and slots, aspects etc.
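To illustrate the dynamic dispatch point with a sketch (the plugin-ish names are made up, this is not any real KDE API): the set of operations is fixed by an interface, the concrete implementation is chosen at runtime, yet every call site stays statically typed.

#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Exporter {                                     // the contract, visible to the compiler
    virtual ~Exporter() = default;
    virtual void exportPage(const std::string &page) = 0;
};

struct PdfExporter : Exporter {
    void exportPage(const std::string &page) override { std::cout << "PDF: "  << page << '\n'; }
};

struct HtmlExporter : Exporter {
    void exportPage(const std::string &page) override { std::cout << "HTML: " << page << '\n'; }
};

int main()
{
    std::vector<std::unique_ptr<Exporter>> plugins;   // which ones is decided at runtime
    plugins.push_back(std::make_unique<PdfExporter>());
    plugins.push_back(std::make_unique<HtmlExporter>());
    for (const auto &plugin : plugins)
        plugin->exportPage("index");                  // dynamic dispatch, statically typed
}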

Regarding EcmaScript

You might say that EcmaScript is becoming fast enough because of good compilers and extensions like type safe arrays (e.g. containing only floating point numbers). But EcmaScript will stay EcmaScript; it will keep the downsides of dynamic typing, and those type safe arrays are an ugly hack to make it feasible for some specific applications. It is simply lacking a proper type system, and it will not get one.

Regarding QML

Using EcmaScript for QtScript was a pragmatic choice, not an awesome innovation: there were many web developers who knew JavaScript. Unfortunately that created yet another way to integrate scripts, and certainly not the most flexible one (cf. my previous blog post); for some reason they did not want to reuse KDE’s innovations (cf. QtCreator and KDevelop, but that is really a different topic…). QML is based on EcmaScript because QtScript had been based on it before. Dynamic typing is definitely not an inherent property of such a declarative UI; most of it could have looked the same with a native implementation based on C++, and implementations in Ruby or whatever would also have been easily possible. I have to admit that C++ is not perfect: it does not provide sophisticated meta-programming, algebraic types or one-letter type inference (“auto” has four letters ;)); the last one may be a small problem, but overall it is simply not simple enough ;), and languages like Scala, D and OCaml have certain problems, too. Hence some of the non-declarative code in QML would have been disproportionately complicated compared to the declarative code. The general approach of declarative UI is certainly good, and now we probably have to accept that it has been implemented using EcmaScript. We can accept it, as long as it is still possible to write Plasmoids using C++ or whatever—and obviously that is the case. Thus QML is generally a good development in my opinion, although implementing program logic in it is often not a good idea and although dynamic typing leaves a bitter aftertaste.

I hope I have got my points about dynamic typing across. Any opinions?

Old Regression by Leonardo da Pisa

Saturday, June 11th, 2011

After reading this blog post I thought a bit about endianness (big-endian is just bad), and while having a shower a theory came to my mind: maybe the Arabs had little-endian integers (meaning least-significant digit first) but wrote (and still do) from right to left (so the least-significant bit/digit ends up on the right). And when Leonardo da Pisa (Fibonacci) brought Arabic numerals to Europe, he wrote them in the same visual order, not flipping the digits, hence establishing big-endian. In fact I could verify that with Wikipedia. But I also noticed that this “bug” had been there before: Indians write from left to right (Wikipedia told me about a coin in Brahmi written from right to left, but that was before there were any numerals), and they have always used big-endian. Thus the Arabs fixed that issue (maybe not knowingly), but stupid Europeans did not get why big-endian is stupid. Furthermore, big-endian numerals look more like those stupid Roman numerals, and our usual way of vocalising them is like in Roman times. And because of Leonardo da Pisa there are those stupid architectures using big-endian representation (fortunately not x86 or amd64), causing non-portability, byte-order marks and all that stupid stuff. And left-shifts could actually be left-shifts and right-shifts could be right-shifts.

Short list of arguments for little-endian:

  • The value of a digit d at position i is simply d·b**i (b being the base). That would obviously be the most natural representation if you implemented integers using bit arrays. It does not depend on the length; no look-ahead required.
  • You can simply add numbers from left to right (no right-alignment for summation).
  • For radix sort you can begin at the left (least-significant digit first).
  • Simple cast between longer and shorter integers without moving any bits.
  • You do not need words like “hundred”, “ten-million”, “billiard” etc., because you can interpret a sequence online, without look-ahead.
  • Repeated modulo and integer division by the base directly yields the little-endian representation (see the sketch after this list).
  • The least-significant bits carry more interesting number theoretical information.
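
A minimal sketch of the d·b**i bullet and the modulo/division bullet above:

#include <cstdint>
#include <iostream>
#include <vector>

// Repeated modulo / integer division yields the digits least-significant first.
std::vector<int> toLittleEndianDigits(std::uint64_t n, int base)
{
    std::vector<int> digits;
    do {
        digits.push_back(static_cast<int>(n % base));
        n /= base;
    } while (n != 0);
    return digits;
}

// Each digit d at position i contributes d*b^i; the value can be accumulated
// while reading, without knowing the length in advance (no look-ahead).
std::uint64_t fromLittleEndianDigits(const std::vector<int> &digits, int base)
{
    std::uint64_t value = 0;
    std::uint64_t weight = 1;                  // b^i
    for (int digit : digits) {
        value += digit * weight;
        weight *= base;
    }
    return value;
}

int main()
{
    const auto digits = toLittleEndianDigits(1337, 10);
    for (int digit : digits)
        std::cout << digit;                    // prints 7331 -- little-endian decimal
    std::cout << " -> " << fromLittleEndianDigits(digits, 10) << '\n';   // 1337
}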

Well, big-endian is more like lexicographic order, although I am not sure whether that is clearly better for natural languages. For division you have to start with the most-significant digit, but—hey—division is obviously not as important as all the other operations where you start with the least-significant digit. Of course sometimes little-endian is not a good notation: for measurements one should use floating point numbers (in a decimal world called “scientific notation”), and the mantissa should start with the most-significant bit/digit and come after the exponent to avoid look-ahead (unlike scientific notation).

If Leonardo da Pisa had thought a bit about what he was doing, there would not be all those drawbacks! Just my thoughts about that regression. ;)

Skype Reverse Engineered

Thursday, June 2nd, 2011

Hi!

Good news for Free Software and open protocols: there has been a successful attempt to reverse engineer Skype (Magnet URI). Nice timing: shortly after Microsoft’s acquisition Skype could finally be broken. :) The author also includes modified Skype executables that allow debugging etc., which is usually prevented by really elaborate anti-features (encryption, the client kills itself if there is a debugger etc.). The sample code is able to send a message via Skype, awesome!

Hopefully there will soon be implementations for Telepathy or libpurple (used by Pidgin and Telepathy Haze). One may say we should not promote such proprietary protocols, but: for many people Skype is important (do not say you do not know anybody using it; it varies, in some groups everybody is using it, elsewhere it is different, I am not using it). The chances that they will switch to Free Software (KDE/GNU/Linux) are much higher if there is support for Skype without proprietary software. And once they start using Pidgin or Telepathy, it is no problem for them to use open protocols, too; you can simply tell them which buttons to click and then they can communicate with you using an open protocol like Jingle or SIP (or XMPP for text messages). Thus it does not only help to spread Free Software, but also to spread open protocols. And all future attempts to commercialise the private data created by Skype can be easily prevented. Even Windows users may start using Pidgin or something like that, in spite of the advertising. Regarding “it is just a hack”: they cannot simply change their protocol, because there is Skype hardware which cannot be updated and would not work any longer (that would make all the non-idealist customers unhappy, too). And for WLM/.NET Messenger/MSN and Oscar/ICQ this approach has been working well for a long time (even with Kopete ;)). Really, really great news!

Unification and Orthogonalisation: Part 3, HTML-Engines and Web Browsing

Thursday, June 2nd, 2011

Hi!

In this blog post I want to share some thoughts about HTML rendering engines and web browsers in KDE. There seem to be some problems; some may be easier to solve with KDE 5 (breaking some API may help).

Does there have to be a KDE-browsing-experience?

When looking at the browser statistics for my blog, most people seem to use Firefox or Chrome/Chromium; using Konqueror or Rekonq is kinda exotic. Browser statistics may be wrong (I am sometimes using a “faked” user agent string if websites do not like Konqueror), but I even know many KDE users personally who are using Konqueror or Chromium. Still, I think KDE browsers actually have potential. Everybody is annoyed about Gtk+ file selection dialogues, but there are also real features provided by KDE: in some areas Rekonq provides a unique interface, and Konqueror can satisfy many wishes for flexibility. I am not that happy with Firefox and Chromium (quoting Martin Gräßlin, “it is broken” (client side decoration ;))). KDE web browsing should not be dropped, and certainly nobody is planning to do that.

Rendering engines

We should be honest about KHTML: we have to admit that the JavaScript support is poor and technically not up to date. Many websites are not working, even KDE-related ones and popular sites like Blogspot. KHTML is certainly not yet dead, because kdewebkit is not yet a working alternative: it is lagging behind regarding KDE integration (KDE style widgets are a must-have), and it cannot be considered stable yet, either. But there is one thing which would definitely be nice: a common interface. KDE is able to attract plugin developers (see Plasma), it has great technical capabilities (KPluginFactory etc.), but they would have to be used. If there were a unified way to access both KHTML KParts and WebKit KParts, it would be more reliable for people wanting to write a plugin, and of course it would be better for every user if KHTML plugins could be used with WebKit in Konqueror. The API could be unified with KDE 5, but there is still a lot of time left until then; maybe an alternative should be found earlier.

Konqueror and Rekonq

That has to be said: Rekonq started as a “hey, I will try to build a web browser” project; today it is the default web browser in Kubuntu and provides e.g. the awesome address bar. It is now a more serious project, so why is it not integrated into KDE technology? Konqueror is using KParts and KXmlGui extensively; I think it would have been no problem to implement Rekonq’s user interface elements as plugins using Konqueror’s infrastructure. Not only would Konqueror benefit, Rekonq would also get a built-in plugin system, PDF viewing, split view, synchronously displaying HTML while modifying it via FTP, etc. I am sure some performance optimisations could be done for KParts/Konqueror, too, without breaking everything. Now we have got two independent web browsers. Even with Dolphin it is a problem: the promised integration did not come true, and the useful dockers (e.g. for Nepomuk), the address bar etc. are not available in Konqueror, which is using its own sidebar unaffected by Dolphin. But for Rekonq it is even worse, because Rekonq does not even try to integrate with KParts.

It would be nice if there were a single, unified web browser with a reliable rendering/JavaScript engine using all the great technology KDE supplies – especially all the flexibility provided by kdeui.

Regards
A loyal Konqueror user

The RSB on “Internet, Software und Revolution” and the Guttenberg Case

Wednesday, June 1st, 2011

The Revolutionär Sozialistischer Bund/IV. Internationale wrote a month ago about “Internet, Software und Revolution”, touching in particular on Free Software and, as a hook, on the plagiarism affair of Karl-Theodor zu Guttenberg. Undoubtedly an advantage of the internet: it enabled fast cooperation and, thanks to at least partially available open, indexed sources of information, efficient work. The latter is of course only partially given; Google Books and SpringerLink are not exactly the ultimate solution, and scientific works should actually be created for the benefit of the general public, which would be relatively easy to achieve in academia thanks to upfront state funding.

Further on in the text, the RSB criticises the criticism of Guttenberg’s plagiarism:

In the end, this whole criticism boiled down to the fact that Guttenberg did not submit to the bourgeois property relations, or rather to their extension as projected into the world of science. Of course it is cynical when a multi-millionaire, whose entire property rests on exactly these relations, breaks them at precisely the moment when it serves his selfish private interest.

Leaving aside the truth of the cynicism pointed out there, I think that quite a lot still speaks for a moral condemnation of this plagiarism, independently of (copyright) law considerations:

  • The RSB presents free licensing, as in Wikipedia, as a counter-model. But Wikipedia is something entirely different from a dissertation: the former consists of tertiary texts, the latter is a primary text. A further distinction has to be made here.
  • A doctoral thesis is not supposed to be a comprehensive explanation of known facts, but primarily to present the new research results of a (prospective) scientist (or of a politician who wants to pride himself on it). The “redundancy” a plagiarism shares with the original is not needed here; only the new results matter, and their presentation is better done with more or fewer citations, depending on the discipline.
  • Moreover, it is about proving a personal research achievement; Guttenberg has once again shone with dishonesty here (remember the incident with the fuel tankers).
  • One may question the whole system of academic degrees and personal achievement; nevertheless (whether the RSB likes it or not): people will always also care about self-realisation, about individualism, and that is a good thing; moreover, a research achievement should be attributable to its author(s). Thus self-presentation cannot be avoided, and the plagiarism remains an adorning of oneself with borrowed plumes, a despicable lie.

I do not think that most people motivated their criticism by the “bourgeois property relations”; they simply saw the minister’s dishonesty come to light and condemned it morally.

In what follows, the absurdity of the property principle for words and software is impressively presented, leaving the Guttenberg hook behind. Interesting then is the transition to the topic of “revolution”, as the title promises. I quote one striking sentence:

The incompatibility of authoritarianism and the internet showed itself in the fact that the dictatorships, in their final days, simply had the internet switched off. If even something like facebook can be used for an upheaval, then we can only guess to what extent platforms could be used that are designed for hierarchy-free communication from the outset.

I fully agree that this opens up new possibilities for democratic control and democratic decision-making – economic and political – which would have seemed hardly possible in a less technologised world. A constituting component of this should be freedom: Free Software, Free Knowledge. I recommend reading the article.

There is a Substantial Antagonism between Free Software and Capitalism

Sunday, May 29th, 2011

It is not uncommon that people want to tell me that Free Software and Free Knowledge fit nicely into the concepts of capitalism. They are right that they can do a good job for humanity by supporting Free Software or Free Knowledge while accepting capitalistic circumstances and feeling comfortable within them. But it is not true that FLOSS and Free Knowledge are about free markets and capitalism. Freedom is not about markets at all. Markets depend on the concept of scarcity. When supporting Free Software, Free Knowledge, Free Research etc. you are working against the concept of scarcity; they remove the scarcity where it is definitely not necessary. Anybody can benefit from software and knowledge, anybody can make it better, everybody is allowed to copy it. Those are the fundamental concepts of Free Software, and they stand in fundamental antagonism to scarcity. The reason that it is working within capitalism is not a common ideology. The reason is: capitalism is not totalitarian. Most governments do not want it to affect every aspect of life; there remains the freedom to act outside of it, to support other people, to have a family, to love somebody without revenue – and thus you are even allowed to fight against scarcity, you can create Free Software.

But laws may change, and there are powerful parties opposing the ideas of cooperation because they benefit from scarcity. Thus people invented “intellectual property” and told us it would be a worthy ideal. That Free Software is using copyright for its copyleft is just a pragmatic approach, necessary to achieve something in the current world. If there were no scarcity for software, i.e. if all software were free, there would be no necessity for either copyright or copyleft.

Let us translate the ideals behind Free Software to other parts of the economy: that would mean stopping scarcity, allowing real freedom. E.g. it would mean making food and medicine freely available for everybody. With increasing automation there will be even less necessity for scarcity, but unfortunately some people benefit from it. Automation gives us the opportunity to stop scarcity and alienation in a smooth process without ruining the economy. But it has to be used wisely – that means Free Knowledge and Free Software to prevent technocracy. It could finally remove the necessity of property. It could result in real, human freedom, in equal opportunities without alienation or structurally caused existential dependence. Then capitalism would be over. But maybe there is another alternative more likely to happen: we are heading into technocracy, where all formal democracy will become worthless. Mighty persons, or maybe their invisible (and evil!) hand, will control intellectual property even more, will control all the people, will keep markets, scarcity, capitalism, poverty everywhere. Those who control and own the knowledge and information can control everything human beings are able to control.

PS:
This article might be interesting to read for those who understand German.

Are you using Wolfram Alpha?

Tuesday, May 17th, 2011

Hi!

This blog post will not be very long, because I want to sleep soon and do further investigations later.

Many of you will know about Wolfram Alpha. When I first saw it, I thought: hey, that is really cool. Many people are using it. But it has some shortcomings:

  • They want you to buy Mathematica
  • It is proprietary
  • You cannot create arbitrarily complicated queries
  • You cannot save temporary results
  • Sometimes English grammar is too ambiguous
  • No localization

Well, that service should not be used; it is actually not that great, and like Google Mail, Chrome OS, iOS or WP7 it is becoming really popular and is a threat to Free Software in my opinion. Especially highly qualified people are using it and stop caring about Free Software (“just a web app”). But what are the alternatives?

  • There is Sage – an awesome computer algebra system combining a lot of free software, using Python instead of ugly, specialized languages like in Mathematica, Maple and Maxima
  • Semantic databases in RDF format, e.g. DBPedia, which crawls Wikipedia, or governmental websites like data.gov or data.gov.uk

For people able to use SPARQL, the mentioned RDF databases are very powerful tools – no limitations on queries or anything like that. So what is missing? A nice interface anybody can use. It would be really nice to have a free tool parsing natural language queries (e.g. using Earley parsers and some probabilistics; it would not really have to understand real sentences, just some fixed structures like “where”, “by”, “all” and fixed sets of attributes would be enough). Those queries could be transformed into e.g. DBPedia SPARQL queries, and the RDF results could be transformed into some nice tables and maybe graphs. A free implementation would be a) awesome and b) much more seriously usable than Wolfram Alpha. Do you know about any project trying to do something like that? Any comments? What is your opinion about such software?
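Just to make the idea concrete, a very rough sketch of the query-generation step. The vocabulary and the DBPedia terms (dbo:Country, dbo:populationTotal) are my assumptions; a real tool would need a proper grammar and a much larger mapping, this only illustrates turning a fixed phrase pattern into SPARQL.

#include <iostream>
#include <map>
#include <string>

// Build a SPARQL query for the fixed pattern "all <class> by <attribute>".
std::string buildSparql(const std::string &className, const std::string &property)
{
    return
        "PREFIX dbo: <http://dbpedia.org/ontology/>\n"
        "SELECT ?item ?value WHERE {\n"
        "  ?item a dbo:" + className + " ;\n"
        "        dbo:" + property + " ?value .\n"
        "} ORDER BY DESC(?value) LIMIT 20\n";
}

int main()
{
    // hypothetical vocabulary: English words -> ontology terms
    std::map<std::string, std::string> classes    = { {"countries", "Country"} };
    std::map<std::string, std::string> attributes = { {"population", "populationTotal"} };

    // "all countries by population"
    std::cout << buildSparql(classes["countries"], attributes["population"]);
}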

Regards

PS:
A few random thoughts about it:

  • Of course natural language stuff would be nice for math, too.
  • With SPARQL access from Sage/Python, data querying and calculations would also be combinable. A tool could generate Python code containing SPARQL.
  • I do not know whether there are practically usable distributed RDF databases, but such software could make it possible to distribute query evaluation to the peers; Free Software projects probably cannot afford Wolfram’s computing capacity.
  • Combining different RDF databases may be a problematic task.

Graphical KDevelop-PG-Qt Output

Tuesday, April 26th, 2011

KDevelop-PG-Qt is really boring, it just generates some boring C++ code. Well, I have not implemented QML-based animations or multitouch gestures for KDevelop-PG-Qt, but now you can get .dot output: graphs which can be visualised using GraphViz, e.g. with dot on the command line or with KGraphViewer. That way you can visualise the finite state machines used for generating the lexer. I guess everybody knows what this is:

[Figure: overview of the UTF-8 DFA]

You have not got it? Let us zoom in:

[Figure: rectangular excerpt of the UTF-8 DFA]

[Figure: square excerpt of the UTF-8 DFA]

You can also download the .dot file and browse it using KGraphViewer, or browse the .svg file (generated using dot) with Gwenview or whatever. What is this automaton about? It is actually quite simple: the automaton reads single bytes from UTF-8-encoded input and recognises whether the input represents a single alphabetic character (e.g. A, or some non-ASCII letter, or whatever). That is quite complicated, because there are many ranges in Unicode representing alphabetic characters, and the UTF-8 encoding makes it even more complicated. This DFA is an optimal (minimum number of states) Moore automaton (previous versions used Mealy for output, but Moore for optimisation, that was stupid), and it needs 206 states; it is really minimal, no heuristics. You are right: Unicode is really complicated. Unfortunately it took 65 seconds to generate the lexer for this file:

%token_stream Lexer ; -- necessary name
 
%input_encoding "utf8" -- encoding used by the deterministic finite automaton
 
%token ALPHABETIC ; -- a token
 
%lexer ->
  {alphabetic} ALPHABETIC ; -- a lexer-rule
  ;

I think automatons like {alphabetic} should be cached (currently unimplemented), because they are responsible for most of the runtime. The implementation of the .dot output is still both buggy and hackish; that has to be changed. But the graphs look nice and they help with spotting errors in the lexer.
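To give an idea of what such a byte-driven automaton does, here is a toy, hand-written sketch. It is not the generated code, and the “alphabetic” set is deliberately reduced to ASCII letters plus U+00C0–U+00DF; the real {alphabetic} automaton covers all Unicode letter ranges, hence the 206 states.

#include <string>

// Toy DFA over raw UTF-8 bytes: accepts iff the input is exactly one "letter",
// where "letter" here only means A-Z, a-z or U+00C0..U+00DF (encoded as 0xC3
// followed by 0x80..0x9F). A real automaton needs many more states and ranges.
enum class State { Start, AfterC3, Accept, Reject };

State step(State s, unsigned char byte)
{
    switch (s) {
    case State::Start:
        if ((byte >= 'A' && byte <= 'Z') || (byte >= 'a' && byte <= 'z'))
            return State::Accept;               // a single-byte ASCII letter
        if (byte == 0xC3)
            return State::AfterC3;              // lead byte of U+00C0..U+00FF
        return State::Reject;
    case State::AfterC3:
        return (byte >= 0x80 && byte <= 0x9F) ? State::Accept : State::Reject;
    default:
        return State::Reject;                   // any byte after a complete letter
    }
}

bool isSingleLetter(const std::string &utf8)
{
    State s = State::Start;
    for (unsigned char b : utf8)
        s = step(s, b);
    return s == State::Accept;
}
// isSingleLetter("A") and isSingleLetter("\xC3\x84") are true, isSingleLetter("AB") is not.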

Do we want to be 1991 forever?

Monday, April 25th, 2011

Hi!

KDE’s licensing policies do not allow GPLv3+, LGPLv3+ and AGPLv3+ software in KDE’s repositories (I guess it applies to git, too, not only to SVN). But do we really want to keep that policy? There are more and more web applications, the ugly cloud stuff, “software as a service”, is growing, and developers want to protect their Free Software by using the GNU Affero General Public License. KDE is being adapted for embedded devices, and we should not care about tivoization? We should only use a 1991 license that does not address a lot of important issues of our times? Why should a KDE application not be relicensed under the conditions of the GPLv3+ or AGPLv3+? In many cases that may be good. Why should there not be new GPLv3/AGPL development? Should developers fearing cloud services not be integrated into the KDE community? We are not BSD, we want to protect ourselves; WebOS is coming soon, GPLv2’s copyleft may become completely ineffective, and we – usually accepting copyleft – should stay stuck with GPLv2? That may be really, really bad. Should we care about GPLv2-only software which would not be able to integrate (A)GPLv3 software? No, they should start caring about the problems of GPLv2, and we should also think about the (A)GPLv3-only software we are currently excluding from being integrated into KDE. Btw: ownCloud already violates the licensing policy (though the policy may not apply, because it does not mention git), and I am sure we will find more “mislicensed” software in the repositories.

Regards