Recently, I was contacted by Fuhyi Kuo, a conservator also interested in origami. She had learned from someone who attended the 2023 German Convention about the models folded by Shuzo Fujimoto that are in my custody, and wanted to help with their preservation. I was very happy to accept her tips, since preserving artifacts for long periods of time is much trickier than you might think. Paper, in particular, is susceptible to aging and degradation from many factors, including chemicals that come from the paper itself. These models, folded by Shuzo Fujimoto in the early 1980s, have already survived around 40 years in good condition, and I want to ensure they can be preserved for the years to come.
Based on Fuhyi’s hints, I bought museum-grade packaging at a specialist online store. The new storage consists of envelopes made from acid-free paper and acid-free cardboard boxes. The envelopes come in different sizes to match the models. I also got an acid-free folder, in which I placed Fujimoto’s letters, each separated from the next by a sheet of acid-free paper. Needless to say, just like the previous packaging, the new set also protects all models and letters from sunlight. The envelopes, boxes, folders, and dividers being acid-free means that contact with the packaging should have as little negative effect on the stored items as possible. In the image, you can see the process of moving individual models from their original storage in a plastic stock book into individual envelopes stored in boxes.
I think this is an interesting case of origami preservation, and it shows how origami can lead to very interesting cross-disciplinary cooperation. I am very grateful to Fuhyi Kuo for getting in touch with me, explaining the factors that affect paper’s longevity, and providing specific hints on how to store it. If you are in possession of old origami models or other similar items, I hope this post helps you preserve them in good condition. Some papers degrade quickly to start with, but even with high-quality papers, preserving them over the span of decades or more requires conscious effort.
In early 2023, I presented my idea for a roadmap of using AI in origami, so I thought I’d revisit the topic briefly now and see what has changed during the last year.
One thing we can clearly see is how quick the progress is, especially in the hot field of generative AI. When I first tested origami-themed image generation less than two years ago, the images I got were of really poor quality most of the time and usually bore little resemblance to actual origami. Today, state-of-the-art models can generate quite convincing images of origami models, some of which could fool even a trained eye. Given that origami is a narrow, very specific field of little commercial interest, it is not the focus of these models, so the possibilities would be much greater if a model were trained on origami specifically.
Overall, I think I would not change much in the roadmap I laid out in the above-mentioned post. Machine Learning models improve quickly, but the ways I see them being used in origami are mostly the same as before. However, the tools, both those used for applying AI models and those used for building them, have improved, which probably makes creating origami-specific models more approachable than before. One issue that often arises when trying to train a model for a specific task, and which I mentioned as a potential roadblock, is the amount of data needed for supervised learning (which happens to be necessary for many practical applications). Meanwhile, there are companies you can outsource data labeling to. This costs money, of course, but it means a dataset for origami-related AI could be built much more quickly than a single developer could ever manage alone. Another possibility would be setting up a data labeling platform and asking the origami community to volunteer the time needed to label the dataset. Either way, labeling even the large dataset needed for building an origami-specific AI model seems much more feasible to me than it did a year ago.
On the other hand, the high computational costs of generative AI have driven most providers to introduce paid subscriptions for their state-of-the-art models, so getting access to the really powerful tools for free no longer seems possible. There are, of course, also models you can self-host, but while powerful, they are often inferior to the most capable ones, due in part to the need to limit model size so that it fits into the memory typically available on consumer-grade GPUs.
Another factor I see gaining prominence is the possibility of fine-tuning existing models. As of early 2024, many commercial AI model providers make this possible, which opens new possibilities for creating domain-specific (e.g. origami-oriented) models within a reasonable budget. While fine-tuning has been known for much longer, its broad availability in commercial offerings makes many tasks more feasible in practice.
As AI tools become more powerful, so do the controversies surrounding their use. Many things that were predicted as theoretical possibilities are becoming practical and cheap quite fast. These include people losing their jobs to automation and deepfakes being used for political propaganda, or “just” for personalized scams. The dispute over the use of copyrighted materials in Machine Learning is gaining steam, and the first attempts to regulate the issue are taking place.
On a lighter note, Sora, a new video generation model released just two days ago, used a video involving origami airplanes for its main website banner. The airplanes are oddly shaped, but the demonstration is still impressive.
Revisiting the roadmap for AI in origami, I considered what interesting thing I would do in this field today if I had a little more time, and came up with the following idea: I’d build an origami model recognizer which could provide the name and author of a model given its image. This task is completely feasible and would provide actual value for the origami community, similar to what the Spot the Creator Facebook group provides. Of course, no system is perfect, so I think we’d still have interesting discussions among human experts for the more difficult cases, but for common models the success rate could be quite decent.
That such a project is feasible is shown by the existence of Brickognize, a site that recognizes the exact model of a Lego brick from a picture. I happen to have been a work colleague of the author, so I know that while making the project work well required significant effort, he was able to do it in his spare time, and it would be easier today since the available tooling is more advanced.
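At its core, a recognizer of this kind could work by embedding each query image into a feature vector and looking up the nearest neighbors in a database of known models. The sketch below illustrates only the lookup step; the three-dimensional embeddings and the database entries are made-up placeholders standing in for the vectors a real image encoder would produce, not data from any actual system.

```python
import numpy as np

# Hypothetical database of known origami models. In a real system,
# each vector would be produced by an image encoder network; the
# numbers here are invented purely for illustration.
MODEL_DB = {
    ("CFW 58", "Shuzo Fujimoto"): np.array([0.9, 0.1, 0.0]),
    ("Crane", "traditional"): np.array([0.1, 0.8, 0.2]),
    ("Paper Boat", "traditional"): np.array([0.0, 0.2, 0.9]),
}

def recognize(query, top_k=1):
    """Return the top_k (name, author) pairs whose embeddings are
    most similar to the query embedding, by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(MODEL_DB.items(),
                    key=lambda kv: cos(query, kv[1]),
                    reverse=True)
    return [name_author for name_author, _ in ranked[:top_k]]

# A query embedding close to the "CFW 58" entry above.
print(recognize(np.array([0.85, 0.15, 0.05])))
```

The nearest-neighbor design has a practical advantage over a fixed classifier: adding a newly published model only requires computing and storing one more embedding, not retraining the whole network.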
Most of the effort in such a project boils down to preparing the right set of training data. For an origami recognizer, I can quickly think of several ways of getting such data:
Photos tagged #origami, extracted from Instagram via the API, could play a similar role. There are caveats, though: copyright and terms of service. I am not a lawyer, so this would require a better analysis. Certainly, building an origami model recognizer should have the goal of helping the origami community rather than upsetting its members.
As for copyright, the case for a search engine (and that is what we would be building here) seems much clearer than for, e.g., generating images in a particular artist’s style. Since search engines have existed for many years, the rules seem mostly settled, and there is much less controversy about a search engine being able to find a piece of information than about generative AI producing new images based on copyrighted material. As for the Flickr and Instagram terms of service, one would have to check whether such use is permitted. Note that many images on Flickr are licensed under Creative Commons licenses, which should make it much easier to identify what kind of use is allowed. Regarding Origami Database, we’d probably have to ask Gilad since the site lacks a clear terms of service page.
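As an illustration, a first pass at gathering Flickr data could use the public search API to find photos that are both tagged “origami” and carry a Creative Commons license. The snippet below only constructs the request URL; the API key is a placeholder, and the parameter names and numeric license IDs are assumptions that should be verified against Flickr’s current API documentation before use.

```python
from urllib.parse import urlencode

# Placeholder -- a real Flickr API key would be required here.
FLICKR_API_KEY = "your-api-key"

params = {
    "method": "flickr.photos.search",  # Flickr's photo search endpoint
    "api_key": FLICKR_API_KEY,
    "tags": "origami",
    "license": "1,2,4,5",              # assumption: IDs of some CC licenses
    "extras": "license,owner_name",    # ask for license info in the response
    "per_page": 100,
    "format": "json",
    "nojsoncallback": 1,
}

url = "https://api.flickr.com/services/rest/?" + urlencode(params)
print(url)
# The URL could then be fetched (e.g. with urllib.request.urlopen)
# and the JSON response parsed for photo IDs and their licenses.
```

Restricting the query to CC-licensed photos from the start keeps the dataset on the safer side of the licensing question discussed above, though each license variant (attribution, non-commercial, no-derivatives) would still need to be honored individually.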
This was my quick update on AI in origami. Given how fast things move, I’ll probably be revisiting this topic more in the future.
Adding to my collection of random items with an origami motif is this package of tissues featuring a traditional paper boat.
Next Sunday, January 7th, Origami Group Eindhoven will be hosting a New Year’s special workshop. There will be ten workshops in total, with me taking the last slot at 20:00 CET. First, I will say a few words about Shuzo Fujimoto and present some models that he himself folded. Then, we will fold his tessellation, CFW 58. Make sure to bring a hexagon (cut from A4, or similar size) with a 16×16×16 triangle grid. White copy paper or other translucent paper is preferred for best effect when viewing the model in back-light.
October 27th, 2023, will be the 101st anniversary of Shuzo Fujimoto’s birth. For this anniversary, I’m presenting to you a picture of Fujimoto not published before (as far as I know). It was taken at his home, in or around 2012, and shows Shuzo Fujimoto (sitting) with editors of his books for Project F, standing from left to right: Taiko Niwa, Satoko Saito, and Tomoko Fuse. The author of the picture is unknown. I got the image from Satoko Saito, who in turn got it from Taiko Niwa. The quality is very poor, and I couldn’t get a higher resolution version, so the version published here is upscaled with AI (some artifacts are visible).
Last year, we celebrated Fujimoto’s 100th birthday, and that was also the year I did most of my research on Fujimoto’s life and work. In 2023, my research had to slow down, but nonetheless I have a few developments to report:
I wish to thank everyone for the encouragement, additional information, translation help, and research material I have received so far. Even based only on the materials I already have, there is lots of work still waiting to be done.
Designs No 11-15 are not pictured in the book and are only described in a few sentences. Terms such as No 1 that appear in the text refer to other models listed in the book just before the fragment in question. I have translated the above fragment using both Google Translate and Deepl, and was able to figure out how to fold items 11, 13, and 14. However, No 12 and No 15 remain mysterious despite me spending quite a bit of time trying out various folding combinations that might fit the description.
I suspect one of the issues, apart from imperfect automated translation, may be errors in recognizing the scanned text and converting it to characters (OCR), since the original print is of low quality. If even a single stroke is misread, a different character may be recognized, changing the meaning completely.
The automated translations I got (not quoted here, so as not to bias anyone attempting their own translation) indicate that:
The problems with these descriptions are:
If you can read Japanese, I would appreciate any help with translating and understanding the above two sentences. Thank you in advance!
For the workshops, we ordered several kilograms of thick colored paper cut into business-card-sized sheets so that we could play with both shapes and colors. Some people compared it to playing Minecraft. I had the idea for a workshop like this on my mind for a long time, but this was the first time I actually executed it. My main goal was to let people join and leave at any moment, which worked out since the unit is so simple that a few minutes are all it takes to get started. I also prepared folding instructions on a flip chart in case someone wanted to fold when there was no one else in the room. It was a good opportunity to present origami to my colleagues, and it seems they enjoyed both the display and the hands-on experience.
One non-standard folding material I tested was fabric used for roller-style window blinds. I received some small samples, which I used to fold a traditional crane and a box lid. Compared to fabric used for e.g. clothes, this one is thicker and stiffer, so I expected it would be useless for folding. Surprisingly, while it certainly is thicker than most fabrics or papers, and has worse memory (it tends to unfold), its folding properties exceeded my expectations. Even with the small, ca 10×10 cm sheet I was able to fold what I had planned. While you can see that the corners are not sharp and the fabric flows more than it folds, both models kept their shape and neither collapsed nor unfolded. I did not need to use glue, pins, or anything similar to hold the folds in place. This may be due to the fabric being stiffened: it’s made for rolling rather than for being worn. On top of that, the fabric has an interesting texture which I think looks beautiful and gives the models a unique appearance. While this is certainly not a replacement for paper, the fabric was surprisingly good to fold, and if I get the chance to try a larger sheet, I will try creating a more complex model out of it.