Serializing large graphs with SIXX in GemStone

Hi guys,

While I promised to start a series of posts about GemStone itself, today I wanted to write down some notes about something I have been doing over the last few days.

Serializing object graphs with SIXX

One possible way of moving objects between different GemStone instances, or even between GemStone and Pharo, is a serializer. But the serializer must be able to serialize in one dialect and materialize in the other. Such a serializer could also be used for backups (besides the GemStone full backups).

For that, the simplest approach we have is SIXX, which is an XML-based serializer. One of the biggest drawbacks of such a serializer is the memory consumed when serializing or materializing large graphs of objects.

In my case, I need to serialize/materialize conceptual databases. These are large enough that SIXX will crash and run out of memory (the classical “VM temporary object memory is full”). The GLASS free version allows 2GB of Shared Page Cache, so the maximum temporary object space that a VM can hold should be less than that. If your SIXX export/import crashed with an out of memory, this post presents a trick that may help you.

Making SIXX temporary data persistent

This trick (thanks Dale for sharing) actually only works for GemStone, not for Pharo. But still, it’s useful. When SIXX crashes with an out of memory, it’s because SIXX creates a lot of TEMPORARY NON-PERSISTENT data that cannot fit in memory. Since those objects are not persistent, they cannot go to disk, hence the out of memory.

SIXX’s port to GemStone provides a kind of API for defining an array instance which should be persistent (i.e., reachable from UserGlobals or another SymbolDictionary). Internally, SIXX then stores a few things in that array, like the stream and some other temporary data. Since the array is now persistent, everything reachable from it can go to disk and come back as needed. So… yes, we will likely have thrashing (lots of objects moving back and forth between memory and disk), but the export/import should finish correctly.

There is one little problem with this trick: if you do not make a GemStone commit, then even if the defined array and everything stored in it are “persistent”, they are not ready to go to disk until you commit. Only after you commit can those persistent objects be moved to disk if more memory is needed.
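To make the shape of the trick concrete, here is a minimal sketch. SixxWriteStream and GsFile are real classes, but #persistentRoot: is my invention standing in for whatever hook your SIXX port actually exposes; check its API for the exact selector.

```smalltalk
| root sws myGraphRoot |
"1. Create the array and make it reachable from a persistent root."
root := Array new.
UserGlobals at: #SixxTempRoot put: root.
System commitTransaction.  "without this commit the array's contents cannot be paged out"

"2. Serialize, telling SIXX to keep its working data in the persistent array.
    #persistentRoot: is an assumed selector; see your SIXX port for the real one."
myGraphRoot := UserGlobals at: #MyDatabase.  "whatever graph you want to export"
sws := SixxWriteStream newOn: (GsFile openWriteOnServer: '/tmp/export.sixx').
sws persistentRoot: root.
sws nextPut: myGraphRoot.
sws close.

"3. Clean up and commit so the temporary data can be garbage collected."
UserGlobals removeKey: #SixxTempRoot.
System commitTransaction.
```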

Writing and reading with UTF8 encoding

Something else I needed was to write the resulting SIXX XML file in UTF-8. Of course, the materialization should also read UTF-8. For that, I used the Grease port to GemStone.
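For the encoding side, a sketch using Grease’s codec (GRCodec comes with the Grease port; I am assuming the usual #encode:/#decode: protocol here, so verify against your Grease version):

```smalltalk
| codec xml bytes |
codec := GRCodec forEncoding: 'utf-8'.
xml := '<sixx>...</sixx>'.      "stands in for the SIXX output"
bytes := codec encode: xml.     "encode before writing the file"
xml := codec decode: bytes.     "decode when reading it back to materialize"
```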

The code and explanation

Please take a look at the code. I have added lots of comments so that, besides documenting here in the blog, I also get the documentation in the code 😉 All the problems and solutions I found are explained in the code.

The serialization

[Screenshot: the serialization code]


The materialization:

[Screenshot: the materialization code]

Now running out of SPC???

Well… in my case, when I tried the above code, it still didn’t work. In other words, even after making sure I was doing the SIXX export and import in a forked gem, and surrounding the code with #commitOnAlmostOutOfMemoryDuring:, the operation was still failing. I was getting the error “FindFreeFrame: potential infinite loop”. From what I read, that seems to indicate that the SPC is fully occupied. When you are inside a GemStone transaction and you have created new persistent objects, all those objects must fit in the SPC at the time you do the #commit.

Dale said: “if you were to look at a statmon you would find that GlobalDirtyPages were filling the cache .. the dirty pages due to a transaction in progress (i.e., you are doing a commit and writing the objects from TOC to SPC) cannot be written to disk until the transaction completes … and it cannot complete until it can write all of the dirty objects from the TOC to the SPC …”

OK… now… if you look at #commitOnAlmostOutOfMemoryDuring:, the memory it is talking about is the GEM_TEMPOBJ_CACHE_SIZE, not the SPC (SHR_PAGE_CACHE_SIZE_KB). Unfortunately, I have other places (like the SIXX export/import) where I do heavy/bulk operations that I have not yet migrated to the new way of using a temporary persistent root and forked gems. Therefore, for the time being, my GEM_TEMPOBJ_CACHE_SIZE is quite big. To give an example, I could have an SPC of 1GB and a 900MB temp space.
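For reference, both sizes live in the GemStone configuration files; a sketch with the values from the example (both settings are in KB, and the numbers are illustrative, not a recommendation):

```
SHR_PAGE_CACHE_SIZE_KB = 1000000;   # ~1GB Shared Page Cache
GEM_TEMPOBJ_CACHE_SIZE = 900000;    # ~900MB temporary object cache per gem
```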


If you look at #commitOnAlmostOutOfMemoryDuring:, you will see that the threshold of the “almost out of memory” is 75%. So… 75% of 900MB is 675MB. SPC (1GB) – 675MB = 325MB. In other words, my SIXX block will commit when my gem temp space reaches 675MB. Most SIXX temp data should be persistent, because we are using that hook to define a persistent root. Therefore, most of that 675MB should be persistent. So, to conclude, what I think was happening is that I had a really big temp space (close to the SPC size) and a high threshold for the commit. Hence, I was filling up the SPC before I was able to commit.


So what can we do? A few options:

1) Do not use #commitOnAlmostOutOfMemoryDuring: but instead split your code with your own specific commits. For example, in my case I could split the SIXX serialization/materialization into operations where I commit every X number of objects. Or whatever. But this is use-case specific.

2) Code a variation of #commitOnAlmostOutOfMemoryDuring: where you pass a smaller threshold as an argument. For example, with 50% it worked correctly for me.

3) Set a smaller GEM_TEMPOBJ_CACHE_SIZE for the forked gems. This should be the best solution, because with a TOC that approaches the size of the SPC you are always in danger of creating more dirty objects than will fit in the SPC, and thus of not being able to commit.
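The variant described in option 2 could look like this. GemStone does provide System class>>signalAlmostOutOfMemoryThreshold: and the AlmostOutOfMemory exception, but treat this as a sketch and verify the selectors in your version:

```smalltalk
commitOnAlmostOutOfMemoryAt: percent during: aBlock
    "Like #commitOnAlmostOutOfMemoryDuring:, but with a caller-chosen threshold."
    ^ [ System signalAlmostOutOfMemoryThreshold: percent.
        aBlock value ]
        on: AlmostOutOfMemory
        do: [ :ex |
            System commitTransaction.  "flush dirty objects out of the TOC"
            ex resume ]
```

The SIXX export would then run inside something like `self commitOnAlmostOutOfMemoryAt: 50 during: [ ... ]`.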

For my app, I finally decided to use a GEM_TEMPOBJ_CACHE_SIZE of 75% of SHR_PAGE_CACHE_SIZE_KB. That is still a problem if I have many gems doing large commits… but I need it. And then, I also made the SIXX export/import above commit at a threshold of 50%.

Executing this from within Seaside?

If you try to do these exports/imports from Seaside callbacks, you will see that it will likely not work, so you will have to invoke them from GemTools, tODE or any other GemStone client. This is because the hook used to commit, #commitOnAlmostOutOfMemoryDuring:, will commit the open transaction that Seaside has. If you don’t use GLASS, or you manage transactions yourself, then the results could vary.

When the Seaside transaction is committed, Seaside will redirect you to the home page because there is no open GemStone transaction left. To solve this and other issues that I will try to explain in another post, we can use separate VMs (usually called Service VMs) that take care of these jobs without affecting the Seaside ones. Thanks Dale and Otto for sharing this too.

Future Work

The next step, if I want to stay with SIXX, would be to use the XML pull parser, which should use less memory. Another possibility could be to use STON, but I am not sure if it is 100% working in GemStone… or maybe try to port Fuel… I tried once and left it with half of the tests passing 🙂



What is GemStone? Part 2

A bit on GemStone history

When talking about Smalltalk, one of the advantages always mentioned is its “maturity”. In the previous post, I commented on some GemStone features. If you read them carefully, it seems as if we were talking about a modern technology that couldn’t have been possible years ago. Wrong!!!! GemStone Systems was founded in 1982 and I think the first release came a few years after. Of course, not all the features I described existed back then, but nobody can dispute its history. That means that when using GemStone you will be getting not only the “maturity” of Smalltalk, but also its own maturity as an object database.

For a long time, GemStone was owned and developed by GemStone Systems. In 2010, VMware acquired GemStone Systems. Later on, in 2013, GemStone and all the other Smalltalk products were acquired by a new company called GemTalk Systems. I don’t know all the details (if you want, you can check them online)… but what I think matters most is that GemTalk Systems now has all the GemStone engineers working for it, it does not depend on a larger company’s decisions (like VMware’s), and it is 100% focused on Smalltalk!

If you want to have a look at a general overview of the company and the impact of their products, I recommend the slides of this presentation.

Why is GemStone even more interesting now than before?

Just in case… I will clarify again: in these posts, I always give my opinion. Not everybody has to agree with me.

Let’s go back a few years. At that time, a few things were true:

  1. Most of the apps being developed were fat desktop clients, and GemStone had neither a UI nor an IDE to develop with.
  2. There was no good open-source and business-friendly Smalltalk (I said it was my opinion!).
  3. GemStone did not have a free license.

The above meant that someone developing a fat client app would require two Smalltalks: one for the UI and GemStone as the database. That also meant paying two licenses: one for the commercial Smalltalk for the UI and one for GemStone. And that could be expensive for certain uses. However, things have changed in recent years:

  1. Most apps are now web based, so we do not need a fat UI.
  2. There is a very cool open-source and business-friendly Smalltalk: Pharo.
  3. GemStone does offer a free license with generous limits (in future posts, I will explain the limits in more detail).

That means you can develop a whole web app in Pharo, put the code in GemStone and run it from there. And… pay no license (within the GemStone free license limits). This is why I think GemStone is even more interesting now than it has ever been before.

A bit more about fat client vs web based

When using an app with a fat client and GemStone as the object database, we actually have two Smalltalks communicating with each other. It is not like “I develop in Smalltalk Whatever and I deploy in GemStone”. No… both Smalltalk Whatever and GemStone are running, and each is communicating with the other. This means there must be some connection or some kind of mapping/adaptor between the two, because both can have differences in their kernel classes. This kind of software is what GemTalk Systems sells as “GemBuilder”. So we have “GemBuilder for VisualWorks”, “GemBuilder for VisualAge”, etc. I have never used these products so I can’t say much about them.

Just one last comment. Of course, building a GemBuilder for a Smalltalk dialect seems “easy” in the sense that GemStone is also a Smalltalk. But what if there were GemBuilders for other languages, so that these could use GemStone as their object database? Well, there is also a “GemBuilder for Java“. This tells us a little about the internal GemStone architecture (the object repository process is somewhat decoupled from the virtual machines running the language). But we will see this later.

What do people mean by “develop in Pharo and deploy in GemStone”?

As a very first step, an app could be both developed and deployed in Pharo. That means we use Pharo tools to develop it, and we also use Pharo to run our application in production. This may work well enough for small apps or a prototype. But, at some point, we may need more power. As I discussed in the previous post, there are many alternatives, and not all solutions are available in all situations. The solution I am interested in for this post is to directly run (deploy) your app in GemStone. What are the requirements? The app cannot be a fat client (in Pharo, this means the app should not be a Morphic app). It could be a web app, a REST server, or any other form that doesn’t involve a fat UI.

In fact… I guess you could even use GemStone as your backend language and database and provide a REST API answering JSON or whatever to a mobile app (maybe even one using Amber??? I don’t know…).

With this alternative, the idea is to develop in Pharo. Then we simply load our code (using Metacello, Git, Monticello, whatever) into GemStone and run it there. Hence, “develop in Pharo and deploy in GemStone”. Of course, not all the code we have developed in Pharo will work perfectly in GemStone, or it may behave a little differently. So some adjustments and work will likely need to be done to make the app work in GemStone as well as in Pharo. But we will talk about this in a future post.
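For example, loading the code into GemStone from a tODE or topaz session can be a plain Metacello expression (the baseline name and repository URL below are placeholders):

```smalltalk
Metacello new
    baseline: 'MyApp';
    repository: 'github://me/myapp:master/repository';
    load.
```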

Most of the time, the app can still be run in Pharo (besides being developed there). So you can likely continue to develop, run, test and debug your app locally with Pharo, and then periodically deploy and test it in GemStone.

To sum up

I hope I have clarified a bit the different scenarios for using GemStone and what people mean when they say “develop in Pharo and deploy in GemStone”. All my posts from now onward will keep this scenario in mind.

See you soon,

What is GemStone?

What is GemStone

When you ask a Smalltalker what Smalltalk is, you will find many different answers: a language, an environment, an object system, a platform or simply a combination of all of those or more. With GemStone, I have a similar feeling. I think different people will answer differently. To me, GemStone is an object system with two big concepts included: an object database and a language. Others will say that it’s a transactional or persistent Smalltalk, an object database, etc.

Before continuing, let me clarify a few things for this post and all the posts that follow:
– I will not be discussing relational databases vs object databases vs NoSQL here. That’s a whole other discussion that I am not willing to write about right now.
– These posts are aimed mostly at Smalltalkers and GemStone newbies, not at GemStone experts.

OK… that being clarified, let’s start. When I refer to an object database, I mean exactly that: an Object Database Management System. Rather than dealing with tables as in relational databases, we can directly persist and query objects. Most of the OODBs I have seen in other languages are an external piece of software that is only a database (just as relational databases are). For the moment, just imagine any of the relational databases you know, but storing objects instead. In this case, you still need a language (and probably a VM running that language) for your application logic, and you still must communicate with the database to store and retrieve objects. I know… that should already be easier than dealing with relational databases, but I personally think it could be better.

GemStone goes a step further. What if the “database” also included the language to run your application? Sounds cool, doesn’t it? So this is the second concept GemStone includes: it is also a language implementation in itself. And which language? Smalltalk, of course!!! This means GemStone IS a Smalltalk dialect, just like any other dialect such as Pharo, VisualWorks, VisualAge, etc. So… GemStone is a Smalltalk dialect but also acts as an object database. You might be thinking “any Smalltalk can act as an object database because we have image persistency”. Fair enough. However, image persistency lacks many features needed to be a truly scalable database (we will talk about this in other posts).

GemStone analogy to an image-based Smalltalk

As I said, the aim of these posts is to explain GemStone in a way most readers can grasp. And sometimes a good way to do that is by comparison with what we already know. So let’s take an example with Pharo. Say we have one application running in one image. Soon, one image may not provide enough power and we need to scale. Smalltalk is cool and allows us to run the very same image with N VMs. OK… so now we have 10 VMs running our app. Imagine this app needs persistency (as most apps do). If the database is outside Pharo (say a relational DB, NoSQL, etc.), then we have no problem, since access to the database from multiple images will be correctly synchronized. But would you be able to use image persistency in this scenario? Of course not, because it is not synchronized among the VMs. But hell… that would be nice, wouldn’t it?

GemStone offers exactly what I would like: multiple (hundreds of) Smalltalk VMs running and sharing the same “image” (repository/database of objects) in a synchronized fashion.

Note, however, that GemStone has neither a UI (it is headless) nor development tools (no IDE). So you still need another Smalltalk to develop your app code. And this is why the Pharo/GemStone combination is so great. But I will talk about this in another post.

To sum up

So you are the happiest programmer on the block. Your language has closures (have you ever tried to imagine not using closures again???), an amazingly simple syntax, a small learning curve, decades of maturity, serious open-source and free dialects available, etc. Now I tell you that you can literally run hundreds of Smalltalk VMs all sharing the same repository of objects. But not only that… also imagine not having to write even a simple mapping to SQL. Imagine that saving an object in the database is just adding an object to a collection (“clientList add: aClient”) and a query is just a normal select (“clientList select: [:each | each age = 42 ]”). BTW… did anyone notice that, apart from selecting via an instance variable (‘age’ in this example), I can also send other domain-specific messages? Say… “clientList select: [:each | each associatedBankPolicy = BankPolicy strict ]”.
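Spelled out, the whole persist-and-query cycle is just plain Smalltalk (the Client class and its accessors are made up for the example):

```smalltalk
| clients |
clients := IdentitySet new.
UserGlobals at: #ClientList put: clients.  "reachable from a root, hence persistent"
clients add: (Client new name: 'Ann'; age: 42; yourself).
System commitTransaction.                  "now it really is in the database"

"A query is ordinary iteration (indexes can speed this up later):"
(UserGlobals at: #ClientList) select: [ :each | each age = 42 ].
```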

Ok, you might still not be convinced. What if I also tell you GemStone supports:

  • Multi-user database support
  • Indexes and reduced-conflict collection classes
  • Distributed environments (imagine that your hundred VMs can also be running on different nodes!)
  • Fault tolerance
  • Security at different levels (even at the object level)
  • 64-bit VMs and multi-CPU support
  • A free license with generous limits

OK… too much info for today. As you can tell, I am very happy with these technologies, so I will try to be objective… but I cannot promise you anything hahaha! I hope I have provoked and intrigued you enough to read the future posts.

Stay tuned,

My presentations at “Summer School on Languages and Applications”, Bolivia 2014

A few months ago, I was invited to give some talks at the “Summer School on Languages and Applications” in Bolivia. I offered several topics I was able to talk about, and 3 of them were chosen. The audience was mostly university students who had never seen Smalltalk before, so my presentations tried to be as friendly as possible for that public.

The presentations were:

Web Development with Smalltalk: this talk was actually mostly about Seaside, but I also gave a very short intro to Smalltalk and, at the end, showed a bit of Amber. Besides the slides, I showed some simple demos. In addition, at the end of the talk, I showed the financial commercial application we are developing for a client and a video about the great Yesplan.

Smalltalk and Business: I gave a quick overview of the advantages of Smalltalk, but from the enterprise point of view. I also talked quite a bit about Pharo (and why a strong open-source dialect matters), its Association and Consortium and all the community-related stuff. Finally, I explained a little about GemStone and Seaside, and how the combination of all that ends up being the stack of frameworks I enjoy the most these days.

Marea: Application-Level Virtual Memory for Object-Oriented Systems: this talk was very similar to my PhD defense… but I tried to explain it a bit more simply (and I removed/skipped some slides).

The general feedback I received was positive, and what I found is that most people were surprised that one could build web apps and do real business with Smalltalk. Most students seemed curious and wanted to learn more.

Everybody treated us very well. Everywhere: in the street, in restaurants, at the university, at the hotel, etc. The people were very gentle and respectful. We even had time to do some sightseeing:

[Photo: a visit to Saint Peter Hill]

So…. in general, it was a very good experience and I really enjoyed my time there.

Reviving my blog

Almost 2 years ago, I wrote my last post. What has happened since then that I didn’t write again? Nothing strange. The typical excuses… too much work and too little time for writing blog posts, contributing to open-source projects, actively participating in mailing lists, etc.

Since finishing my PhD two years ago, I have been working as an independent software developer. I have had the pleasure of being 100% busy with different projects, all in Smalltalk. I really can’t complain. I do have some free time now, so I will try to revive this blog, which I have always enjoyed a lot.

What will I be writing about? Most of my topics used to be about low-level stuff: the virtual machine, meta-programming, frameworks, deep Smalltalk internals, my PhD topic, etc. But what will I write about now? Well… my context has changed a bit. While I still develop some internal frameworks, libraries and low-level code, I am mostly dealing with normal business development, persistency issues, web development (Seaside, jQuery, Bootstrap, Ajax, etc.), Pharo, GemStone, deployment and configuration, security, sysadmin tasks, etc. So I guess my posts will likely be related to that.

Years ago, I wrote a long series of posts I called “Journey through the Pharo Virtual Machine”, which was intended for VM newbies. I personally think those posts contributed a bit to the huge effort of the Pharo team to ease compiling and building the Pharo VM (mostly for non-VM hackers). A lot of progress has been made in the VM field since. Now, it is very common to read in the Pharo mailing list about people who easily compiled the VM, fixed a bug, did a port to platform X or whatever… So I think we are now seeing the fruits of a long effort started years ago by a lot of people.

Something that is in my head these days is to do a similar series of posts, but about GemStone Smalltalk. Just as I was not an expert in the VM field when I wrote the previous series, I am not a GemStone expert either. There are people with much more knowledge than me. However, I feel I have a few things to share that some other people would like to know about. So… what do you think? Would you be interested?

OK… that was all for today. I really hope I can revive this blog and start writing again, which I have always enjoyed.




Headless support for Cog Cocoa VM

Hi guys.

As you may know, I finished my PhD in Computer Science in France and I am now back in my country, Argentina. I have started working as a freelancer/consultant/contractor/independent developer. If you are interested in talking with me, please send me a private email.

For a long time, Pharoers and Squeakers have been asking for headless support in the Cocoa VMs, just as we have with the Carbon VMs. Carbon is becoming a legacy framework, so people needed this.
I want to take this opportunity to thank Square [i] International for sponsoring me to implement this support. Not only have they sponsored the development, but they have also agreed to release it under the MIT license for the community. This headless support will be included in the official Pharo VM and will therefore be accessible to everybody. You can read more details in the ANN email.

So… thanks, Square [i] International, for letting me work on something so fun and so needed.

LZ4 binding for Pharo

Hi guys. In the last few days I wrote a Pharo binding for the LZ4 compressor (thanks to Camillo Bruni for pointing it out), and I wanted to share it. The main goal of LZ4 is to be really fast at compressing and uncompressing, not to obtain the biggest possible compression ratio.

The main reason I wrote this binding is for the Fuel serializer, with the idea of compressing/uncompressing the serialization (a ByteArray) of a graph. Hopefully, with a little overhead (for compressing and uncompressing), we gain a lot when writing to the stream (mostly with files and the network). However, the binding is not coupled to Fuel at all.

I have documented all the steps to install and run LZ4 in Pharo here. Please, if you give it a try, let me know if it worked or if you had problems.

I would also like to do some more benchmarks with it, because so far I have only done a few. So if you have benchmarks to share with me, please do.

So far, LZ4 does not provide a streaming-like API. Camillo and I tried to build a streaming API in Pharo (like ZLibWriteStream, GZipWriteStream, etc.) but the results were not good enough. So we are still analyzing this.

Ahhh yes, for the binding I used NativeBoost FFI, so I guess I will write a post soon to explain how to wrap a very simple library with NB.

See you,

Dr. Mariano Martinez Peck :)

Hi guys. Last Monday, the 29th of October, I did my PhD defense and everything went well (mention très honorable!), so I am now officially a doctor 🙂 My presentation was 45 minutes long and I liked how it went. Have you ever wondered why I was involved in the Fuel serializer, Ghost proxies, VM hacking, Moose’s DistributionMaps, databases, etc.? If so, you can see the slides of my presentation here. Notice that there are lots of slides; this is because I have several animations, and each intermediate step is a new slide in the PDF.

After my presentation, the jury had time to ask me any questions they had and to give feedback. Lots of interesting questions and discussions came out of that. After a private discussion among the members of the jury, the president read my defense report, and we followed with a cocktail with drinks and snacks.

The presentation was recorded (thanks Santi and Anthony for taking care of that), and I am now processing it… I will let you know when it is ready.

The jury was composed of 8 people, 4 of whom were my supervisors:

-Pr. Christophe Dony, Lirmm, Univ. Montpellier, France.
-Pr. Robert Hirschfeld, HPI, Potsdam, Germany.
-Dr. Jean-Bernard Stéfani, DR, Equipe SARDES, INRIA Grenoble-Rhône-Alpes, France.
-Dr. Roel Wuyts, Principal Scientist at IMEC and Professor at the Catholic University of Leuven, Belgium.
-Dr. Stéphane Ducasse, DR, Equipe RMod, INRIA Lille Nord Europe, France.
-Dr. Marcus Denker, CR, Equipe RMod, INRIA Lille Nord Europe, France.
-Dr. Luc Fabresse, Ecole des Mines de Douai, Université de Lille Nord de France.
-Dr. Noury Bouraqadi, Ecole des Mines de Douai, Université de Lille Nord de France.

So, the PhD has reached its end. Now it is time to move to a different stage.

See you,

Dr. Mariano Martinez Peck 🙂

My PhD defense: “Application-Level Virtual Memory for Object-Oriented Systems”

Hi all. After 3 years of hard work, my “PhD journey” is arriving at an end (which means, among other things, that it is now time to search for a job again hahaha). The defense will take place on Monday, October 29, 2012 at Mines de Douai, site Lahure, room “Espace Somme”, Douai, France.

After the defense, there will be a kind of cocktail with some food and drinks. If you are reading this and are interested, you are more than invited to come 🙂 Just send me a private email for further details.

The following is the title and abstract of the thesis:

Application-Level Virtual Memory for Object-Oriented Systems

During the execution of object-oriented applications, several millions of objects are created, used and then collected if they are no longer referenced. Problems appear when objects are unused but cannot be garbage-collected because they are still referenced from other objects. This is an issue because those objects waste primary memory, and applications use more primary memory than they actually need. We claim that relying on operating system (OS) virtual memory is not always enough, since it is completely transparent to applications. The OS cannot take into account the domain and structure of applications. At the same time, applications have no easy way to control or influence memory management.

In this dissertation, we present Marea, an efficient application-level virtual memory for object-oriented programming languages. Its main goal is to offer the programmer a novel solution to handle application-level memory. Developers can instruct our system to release primary memory by swapping out unused yet referenced objects to secondary memory.

Marea is designed to: 1) save as much memory as possible i.e., the memory used by its infrastructure is minimal compared to the amount of memory released by swapping out unused objects, 2) minimize the runtime overhead i.e., the swapping process is fast enough to avoid slowing down primary computations of applications, and 3) allow the programmer to control or influence the objects to swap.

Besides describing the model and the algorithms behind Marea, we also present our implementation in the Pharo programming language. Our approach has been qualitatively and quantitatively validated. Our experiments and benchmarks on real-world applications show that Marea can reduce the memory footprint by between 25% and 40%.

Halting when VM sends a particular message or on assertion failures

This post is mostly a reminder for myself, because each time I need to do this, I forget how 🙂

There are usually 2 cases where I want the VM to halt (a breakpoint):

1) When a particular message is being processed.

2) When there is an assertion failure. CogVM has some kind of assertions (conditions) that when evaluated to false, mean something probably went wrong. When this happens, the condition is printed in the console and the line number is shown. For example, if we get this in the console:


It means that the condition evaluated to false. And 41946 is the line number. Great. But how can I put a breakpoint here so that the VM halts?

So… what can we do with CogVM? Of course, we first need to build the VM in “debug mode”. Here you can see how to build the VM, and here how to do it in debug mode. Then we can do something like this (taken from an email from Eliot):

McStalker.macbuild$ gdb
GNU gdb 6.3.50-20050815 (Apple version gdb-1515) (Sat Jan 15 08:33:48 UTC 2011)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin"...Reading symbols for shared libraries ................ done
(gdb) break warning
Breakpoint 1 at 0x105e2b: file /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c, line 39.
(gdb) run -breaksel initialize ~/Squeak/Squeak4.2/trunk4.2.image
Starting program: /Users/eliot/Cog/oscogvm/macbuild/ -breaksel initialize ~/Squeak/Squeak4.2/trunk4.2.image
Reading symbols for shared libraries .+++++++++++++++..................................................................................... done
Reading symbols for shared libraries . done

Breakpoint 1, warning (s=0x16487c "send breakpoint (heartbeat suppressed)") at /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c:39
39              printf("\n%s\n", s);
(gdb) where 5
#0  warning (s=0x16487c "send breakpoint (heartbeat suppressed)") at /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c:39
#1  0x0010b490 in interpret () at /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c:4747
#2  0x0011d521 in enterSmalltalkExecutiveImplementation () at /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c:14103
#3  0x00124bc7 in initStackPagesAndInterpret () at /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c:17731
#4  0x00105ec9 in interpret () at /Users/eliot/Cog/oscogvm/macbuild/../src/vm/gcc3x-cointerp.c:1933
(More stack frames follow...)

So the magic line here is “(gdb) break warning”, which puts a breakpoint in the warning() function. Automatically, the assertion failures end up calling this function, and therefore the VM halts. With this line we achieve 2).

To achieve 1), the key line is “Starting program: /Users/eliot/Cog/oscogvm/macbuild/ -breaksel initialize ~/Squeak/Squeak4.2/trunk4.2.image”. With “-breaksel” we can pass a selector as a parameter (#initialize in this case). So each time the message #initialize is sent, the VM will also halt in the warning function, and you have the whole stack there to analyze whatever you want.

I am not 100% sure, but the following is what I understood from Eliot about how it works:

So, if I understood correctly, I can put a breakpoint in the function warning() with “break warning”. With the -breaksel parameter you set an instVar with the selector name and size. Then, anywhere, I can send #compilationBreak: selectorOop point: selectorLength, and that will magically check whether the selectorOop is the one I passed with -breaksel; if so, it will call warning(), which has a breakpoint on it, hence I can debug 🙂 AWESOME!!!! Now with CMake I can even generate an Xcode project and debug it 🙂

That was all. Maybe this will be helpful for someone else too.