In the early days of Ink, the most interesting thing Ink programs could do was take some textual input, and output some text back to the terminal. While that was useful for testing the language, it was far from interesting. So once the basics of the language were up and running, I wanted a way to render images from Ink programs. After some research, I settled on BMP as my file format of choice, and wrote bmp.ink, a tiny BMP image encoder in about 100 lines of Ink code.
Armed with this new library, Ink could do so many more cool, creatively interesting things, like generate graphs, render charts, and compute a Mandelbrot set into a beautiful graphic (like the one above), all without depending on other external tools. This is the story of why I chose BMP as my file format, how bmp.ink came to be, and why this vintage file format is a diamond in the rough for small toy programming projects.
Like any topic in computing, designing an image file format is a game of tradeoffs. The most popular file formats, like JPG and PNG, optimize for image fidelity, speed, and file size. Other formats, like SVG, specialize in certain kinds of images, like vector graphics. Formats for professional graphics workflows sometimes sacrifice everything else, including cross-compatibility with other software, for image quality.
When I set out to write an image encoder in Ink, I knew from the start that the most common formats like JPG and PNG wouldn't be ideal. Both are excellent file formats with decades of research behind them, but encoding JPG and PNG images isn't trivial: they depend on some clever math, like discrete cosine transforms and Huffman coding, to trade file format complexity for smaller file sizes.
But for me, the #1 priority was implementation simplicity. I wanted to build an encoder quickly, so I could get on with building things that used the library to generate interesting images. This meant I needed a format that did as little as possible to compress or transform the original image data, given as a grid of RGB pixel values.
On the other end of the convenience-practicality spectrum are image formats based on text files, like the PPM image formats. PPM images were designed so they could be shared as plain text files: a PPM file stores the color value of each pixel as a string of numbers. This makes PPM files easy to work with in any language that supports robust string manipulation, but because PPM is an obscure format that never saw widespread general use, not all operating systems and image viewers support it. For example, on the MacBook I was working with, the native Preview app couldn't open PPM files. I could have used another library or piece of software to translate PPM files to a more popular format like PNG, but that felt unsatisfying, like I was only solving part of the problem at hand.
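To make the comparison concrete, here is a minimal sketch of a plain-text (P3) PPM writer in Python. This is not code from bmp.ink or the Netpbm tools; the function name and structure are my own illustration of the format described above.

```python
def write_ppm(path, pixels):
    """Write a plain-text (P3) PPM image.

    pixels: a list of rows (top to bottom), each row a list of
    (r, g, b) tuples with values in 0..255.
    """
    height, width = len(pixels), len(pixels[0])
    with open(path, "w") as f:
        # Header: magic number, dimensions, maximum color value
        f.write(f"P3\n{width} {height}\n255\n")
        # One line of space-separated color values per row of pixels
        for row in pixels:
            f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")
```

The whole format really is just numbers as text, which is why it is so pleasant to emit from a language with good string support, and why viewers that only speak binary formats ignore it.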
Searching for a format that fit the balance I needed between simplicity and compatibility, I found the BMP file format. BMP is a raster image file format, which means it stores color data for individual pixels. What sets BMP apart from other more common formats is that BMP is not a compressed image format - each RGB pixel is stored exactly as a 3-byte chunk of data in the file, and all the pixels of an image are stored sequentially in the file, usually in rows starting from the bottom left of the image. An entire, real-world BMP file is just a big array of pixel data stored this way, prefixed with a small header with some metadata about the image like dimensions and file type.
This format is much simpler than JPG or PNG! It's quite possible for any programmer to sit down and write an encoder that translates a list of RGB values into a BMP file format, because the format is such a straightforward transformation on the raw bitmap data of the image. As a bonus, because BMP images were quite common once, most operating systems and image viewers natively display BMP files (the last image on this post is a BMP file, displayed by your browser).
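To show just how little work the format demands, here is a sketch of a 24-bit BMP encoder in Python. This is not the actual bmp.ink code (which is written in Ink); the helper name and layout are my own, but the header fields and the bottom-up, BGR pixel order follow the standard BMP layout described above.

```python
import struct

def write_bmp(path, pixels):
    """Write a 24-bit uncompressed BMP.

    pixels: a list of rows (top to bottom), each row a list of
    (r, g, b) tuples with values in 0..255.
    """
    height, width = len(pixels), len(pixels[0])
    padding = (4 - (width * 3) % 4) % 4          # each row is padded to 4 bytes
    image_size = (width * 3 + padding) * height
    file_size = 14 + 40 + image_size             # two headers + pixel array

    with open(path, "wb") as f:
        # 14-byte file header: magic, file size, reserved fields, pixel data offset
        f.write(struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 54))
        # 40-byte BITMAPINFOHEADER: header size, width, height, color planes,
        # bits per pixel, compression (0 = none), image size, x/y resolution
        # in pixels per meter, palette sizes (unused for 24-bit)
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                            image_size, 2835, 2835, 0, 0))
        # Pixel array: bottom row first, each pixel stored as B, G, R bytes
        for row in reversed(pixels):
            for r, g, b in row:
                f.write(bytes((b, g, r)))
            f.write(b"\x00" * padding)
```

A file written this way should open directly in Preview, a browser, or nearly any image viewer, which is exactly the compatibility win over PPM.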
The discovery of exoplanets—planets orbiting stars outside our solar system—has revolutionized the search for extraterrestrial life. With over 5,000 confirmed exoplanets, astronomers are focusing on those within the "habitable zone," where conditions might allow liquid water to exist, a crucial ingredient for life as we know it. This project explores the methods used to detect exoplanets, including the transit method, where telescopes observe the slight dimming of a star as a planet passes in front of it, and the radial velocity method, which measures the star's wobble caused by the gravitational pull of orbiting planets. Advanced space telescopes like NASA’s James Webb Space Telescope are now capable of analyzing exoplanet atmospheres, searching for biosignatures—chemical markers such as oxygen, methane, or carbon dioxide that could indicate the presence of life. The project also examines the variety of exoplanet types discovered, from gas giants similar to Jupiter to rocky, Earth-like planets, and how factors such as a planet’s size, atmosphere, and distance from its star influence its potential habitability. While no definitive evidence of extraterrestrial life has been found, each new exoplanet discovery brings us closer to answering one of humanity’s most profound questions: Are we alone in the universe?
Antonio Torres
Epigenetics is the study of changes in gene expression that occur without altering the underlying DNA sequence. This emerging field challenges the classical view of genetics, showing that genes are not the sole determinants of biological traits and disease susceptibility. Instead, epigenetic mechanisms such as DNA methylation and histone modification play a pivotal role in turning genes on or off in response to environmental factors like diet, stress, and toxins. These chemical modifications can affect how tightly DNA is wound around histones, influencing which genes are accessible for transcription. Epigenetic changes are dynamic and can occur throughout an organism’s life, sometimes even being passed onto future generations. This project explores how epigenetic factors contribute to human health, focusing on conditions such as cancer, where abnormal epigenetic regulation can lead to uncontrolled cell growth. It also discusses the potential for therapeutic interventions that target epigenetic changes, offering hope for treating diseases that have a genetic component without directly altering the DNA itself. By understanding the epigenetic code, scientists can better grasp the complex interplay between genes and the environment, revolutionizing our approach to personalized medicine.
Plasma, the fourth state of matter, consists of superheated, ionized gas in which electrons are separated from their nuclei. Though less common on Earth, plasma makes up 99% of the visible universe, found in stars, including our Sun. Understanding plasma behavior is crucial for developing fusion energy, a process where atomic nuclei fuse together, releasing vast amounts of energy. Unlike nuclear fission, which powers today’s nuclear reactors, fusion produces minimal radioactive waste and has the potential to offer an almost limitless supply of clean energy. However, achieving sustained nuclear fusion on Earth requires replicating conditions similar to those in stars—temperatures reaching millions of degrees Celsius, at which hydrogen isotopes can fuse into helium. This project explores the science behind magnetic confinement in fusion reactors like tokamaks, which use powerful magnetic fields to control and stabilize plasma. It also discusses the current challenges in fusion research, including maintaining plasma stability and achieving "break-even" energy output, where the energy produced by fusion equals or exceeds the energy required to sustain the reaction. While fusion energy remains elusive, recent breakthroughs in plasma physics bring humanity closer to harnessing the same power that fuels the stars.
Neutrinos, often called "ghost particles," are among the most elusive and fascinating entities in particle physics. These subatomic particles are incredibly difficult to detect because they rarely interact with matter, passing through virtually everything, including entire planets, without leaving a trace. Neutrinos come in three known flavors—electron, muon, and tau—and can oscillate between these forms as they travel through space. This project explores the importance of neutrinos in the Standard Model of particle physics, where they play a role in weak nuclear interactions and beta decay processes. Despite being nearly massless, neutrinos could hold the key to understanding some of the universe's biggest mysteries, such as why there is more matter than antimatter. Their study is also crucial in astrophysics, as neutrinos provide insight into processes like supernovae and nuclear fusion in stars. Neutrino observatories, such as the IceCube Neutrino Observatory in Antarctica, aim to detect these particles by using massive detectors placed deep underground or under ice, capturing the rare instances when a neutrino interacts with surrounding matter.
Dark matter remains one of the most elusive substances in the universe, yet it accounts for roughly 27% of the universe's total mass and energy content. Unlike normal matter, dark matter doesn’t interact with electromagnetic forces, making it invisible to telescopes and undetectable through emitted light. However, its gravitational influence is observable on the motion of galaxies and the formation of large-scale structures in the universe. This project investigates how dark matter’s gravitational effects, inferred from the rotation curves of galaxies and the cosmic microwave background (CMB) radiation, indicate the presence of an unseen mass that binds galaxies and galaxy clusters together. Without dark matter, the observed velocities of stars in the outer regions of galaxies would suggest they should fly apart, given the insufficient visible mass to produce such gravitational pull. The project also touches on leading candidates for dark matter, such as weakly interacting massive particles (WIMPs) and axions, exploring how current particle physics experiments, including those at the Large Hadron Collider (LHC) and deep underground detectors, aim to identify or even capture dark matter particles. Moreover, through the use of computer simulations, cosmologists have mapped how dark matter helps shape the web-like structure of the universe, providing a framework around which galaxies form. Though its precise nature remains a mystery, dark matter plays a critical role in the universe’s evolution, and understanding it could unravel some of the deepest secrets about the cosmos.
Germany is in Europe. After World War II, John F. Kennedy visited Berlin in 1963 and declared "Ich bin ein Berliner", a phrase sometimes jokingly mistranslated as "I am a donut".
Not to be afraid to fall in love
Even if the heart breaks
Not to be afraid of losing your way
To get up every morning
And go out into life
And try everything before it's over
To search for where we came from
And always return, in the end, to the beginning
To find more beauty in everything
And dance until we drop from exhaustion
Or from love
Of all the moments in time
To find one to hold on to
To say that we've arrived
To always remember to stop for a moment
And give thanks for what we have and for where we came from
To hold her at night
As she falls asleep
Then the whole world calms down
To breathe her in deeply
To know that always
I will be there for her
Concretely, what this means is that if you push a 100MB file to your git repo, everybody else who ever clones that repo will also have to download your 100MB file and store it on disk, in their git repo, for the rest of eternity. This is true even if you immediately delete it in the next commit! It just sticks around forever, as unwanted dead weight.
But, this didn't satisfy my curiosity. It was a great high-level explanation, but I wanted to understand what was going on, one level deeper.
The Git object model
To understand exactly what's going on, it helps to understand how git works. There are 3 data structures in git: blobs, trees and commits.
How commits, trees and blobs all fit together. From https://www.keypuncher.net/blog/git-5.
Blobs are individual files, stored in your .git/objects directory (which git calls the 'object database'). They’re not stored by filename, but instead by hash. What this means is that if you have two files named “text1.txt” and “text2.txt”, but they both contain the word “hello”, then you’ll only have one entry in the database: “hello”.
A fun and counterintuitive fact about blobs is that if you have one file, but you update it, git will store two blobs: one for the old version of the file and one for the new version of the file. And these aren’t diffs or deltas: git hashes and stores the entire file contents of the two file versions.
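This double storage is easy to see with git's plumbing commands. Here is a small sketch in a throwaway repo (the filename and contents are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q
echo "version one" > notes.txt
git add notes.txt                      # stores a blob for "version one"
echo "version two" > notes.txt
git add notes.txt                      # stores a second, full blob
# Both versions now live in the object database as separate, whole blobs:
v1=$(echo "version one" | git hash-object --stdin)
v2=$(echo "version two" | git hash-object --stdin)
git cat-file -p "$v1"                  # the old content, still kept in full
git cat-file -p "$v2"                  # the new content
```

Note that `git cat-file -p "$v1"` still prints the old content even though the working tree has moved on; nothing here is a delta.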
An example of how this is all implemented, from Git's documentation.
(This isn’t as horrifically inefficient as it seems, because Git does some compression along the way. You can read more about it by googling 'packfiles'.)
Git associates filenames to blobs using structures called trees, which store pointers to blobs and other trees. A commit is just a pointer to a specific tree. And a branch is basically a pointer to a commit.
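You can walk this whole chain, from commit to tree to blob, with `git cat-file`. A sketch in a throwaway repo (the filename and the identity settings are just for illustration; the config lines exist only so the commit is allowed):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo hello > hello.txt
git add hello.txt
git commit -qm "add hello"
# A commit points at a tree; the tree maps filenames to blob hashes:
git cat-file -p HEAD              # shows "tree <hash>", author, message
git cat-file -p 'HEAD^{tree}'     # shows "100644 blob <hash>  hello.txt"
blob=$(git rev-parse HEAD:hello.txt)
git cat-file -p "$blob"           # prints the file contents
```

Each level is just a pointer to the one below, which is why a branch is so cheap: it is a single hash, not a copy of anything.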
This object database exists independently of commits and branches, because it is the thing that stores the information about them. But what that means is that when you add a large file, even if it’s just to a side branch, you create a new entry in the object database for the rest of eternity.
Because the object database is independent of branches, there's no real way to isolate your change. Everyone else who ever uses your repo will need to download your file changes, and that's why everyone is sad when you commit large files to Git.
What to do?
And yet the need exists: sometimes you do just need to move files between local and remote computers. If you're in this boat, your best options are:
scp: the classic. Getting the syntax and filenames right can be a bit of a pain, but on the whole scp works very well for individual files.
rsync: a better alternative, which does smart diffing if there are multiple files to transfer. This is what we use for Moonglow's file syncing.
Git-LFS is a separate, cool approach that doesn't quite solve this problem. You might know it because of Huggingface, which is basically Git-LFS-as-a-service.
Instead of storing large file contents as objects in the blob database, it stores them on a cloud server that you set up, and puts a link to them in the blob database. This means the large file only gets pulled if it's needed. For repos that need to systematically store files that you expect everyone using the repo to need, Git-LFS is a very good option. But for one-off file syncs it's wasteful, as you'll still be adding a one-off file to a large, shared repo.
If you liked this, give Moonglow a try! It lets you start and stop GPU instances on AWS and Runpod, and integrates with VSCode so that you can connect iPython notebooks to them without leaving your editor. We give you $5 of free GPU credit when you sign up.
Moonglow Blog: tech notes for Jupyter notebook users © 2024
The other day I was playing a tremendous match of Fortnite with my buddy Manel, and I took out a team of three all by myself. The problem is that one of them was carrying the Captain America shield, and I said, "I'll grab it and it's a Victory Royale," but that rat Manel grabbed it and wouldn't give it to me. Now I don't speak to him anymore, and I don't drive him to work either.
Jon is observing Mance's host, taking note of all the giants and mammoths that make up the army. Tormund is telling Jon some tall tales about himself, when the eagle that was once Orell rakes Jon's face. Rattleshirt has arrived to bring Jon before Mance, this time at the Fist of the First Men.
The king confronts Jon about how many men were at the Fist, and who led. Jon, seeing how many of the Watch died here and realizing Mance may kill him for lying, tells the truth. The situation is still tense. Jon feels that Mance may still have him killed because Jon lied to him previously.
Briefly he thinks about attacking Mance but Ygritte saves him by telling them that they are lovers. The wildlings respect any man who steals his woman, and Rayder informs Jon that he will be leaving with Styr and Jarl on the morrow to climb the Wall. That night, Jon and Ygritte share a bed together.
To coordinate, supervise, and evaluate the correct and timely recording, storage, systematization, control, tracking, and administration of the information that must be incorporated into the interoperable national computer system provided for in Article 38 of the Law, and into the National Forensic Data Bank, as well as into any other computer system whose use is mandatory within its sphere of competence, so as to make it possible to know with certainty the state of the phenomenon of criminal markets, and to carry out criminogenic and geo-criminal studies that support investigative actions and the fight against crime, and to define policies on the administration of justice, in accordance with the directives issued by the head of the Criminal Investigation Agency.