[Discuss] Port Scanning

markw at mohawksoft.com
Fri Aug 9 18:07:27 EDT 2024


> Dan Ritter said on Tue, 6 Aug 2024 13:03:04 -0400
>
>>The rise of virtual machines and containers is an admission of
>>systemic failure: people gave up on managing dependencies in a
>>sensible manner. Rather than have a deployment system which
>>produces a working program plus libraries and configuration,
>>these systems effectively ship a developer's laptop to the
>>cloud.

This is a serious problem and I think containers are probably the best
solution. This isn't a "failure" per se; it is a feature that has grown
out of control.

Way back in the formless mist of the 1970s and 1980s there was a problem.
Every single program was linked against some set of libraries. Each of
these libraries provided functions for strings, file access, and whatever.
Every executable contained its own copy of the binary contents of those
libraries. If you needed to upgrade a library, you had to upgrade every
application that used that library.

Then we introduced "shared libraries," which allow [N] applications to
share one copy of a library. This saved a lot of space and a lot of disk
I/O. Since the library contents now live in a separate file, we only need
to update the one shared library file. This was, at the time, a great
solution to the problem of having "code reuse" but not "binary reuse."
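
To make the contrast concrete, here is a minimal sketch; "hello.c" and
the build commands are just my illustration:

    /* hello.c -- the same source can be linked either way. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello\n");
        return 0;
    }

    /* Hypothetical build commands:
     *
     *   gcc -static hello.c -o hello-static   # a private copy of libc is
     *                                         # baked into the executable
     *   gcc hello.c -o hello-shared           # libc.so.6 is resolved by
     *                                         # the dynamic loader at run
     *                                         # time
     *
     * "ldd hello-shared" lists the shared libraries it pulls in;
     * "ldd hello-static" reports "not a dynamic executable".
     */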

Then came dependencies. You can't install application W without also
having libraries x,y, and z. So, we came up with package managers that can
keep track of that.

Then came stupid code monkeys who change APIs in shared libraries. Now
library [x] upgrades from version 1 to version 2, and the idiot code
monkey doesn't care about backward compatibility with version 1. Thus,
every program coded against version 1 needs to be ported to version 2.
What's worse, if you installed version 2, it broke every application
that linked to library [x] expecting version 1.
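
A toy example of the kind of break I mean ("libfrob" and frob() are
made up for illustration):

    /* frob.h as shipped in libfrob version 1:
     *     int frob(const char *name);
     *
     * frob.h as shipped in version 2 -- signature changed, no
     * compatibility shim:
     *     int frob(const char *name, int flags);
     */

    /* Application code written against version 1: */
    int frob(const char *name);

    int use_it(void)
    {
        return frob("widget");
    }

    /* Rebuilding this against the version 2 header fails to compile
     * (too few arguments).  Worse, if the soname didn't change, the
     * already-built binary keeps loading the new library at run time,
     * and frob() now looks for a "flags" argument the caller never
     * passed. */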

Then we introduced shared library versioning. This only goes so far,
because while library [x] can use library [y] version 1, library [z]
needs library [y] version 2, and two differently versioned copies of
the same library cannot really work together in one process.
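
Roughly why: with ordinary load-time linking, both versions' symbols
land in the same global scope, and a call resolves to whichever copy
the loader saw first. The only way to keep them apart is to load them
explicitly and keep their symbols private, which almost nothing does.
A hedged sketch, reusing the [y] placeholder from above:

    /* two_versions.c -- build with "gcc two_versions.c -ldl" on older
     * glibc; on glibc 2.34+ the -ldl is no longer needed. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* RTLD_LOCAL keeps each version's symbols out of the global
         * scope, so the two copies can at least coexist when loaded
         * by hand like this. */
        void *v1 = dlopen("liby.so.1", RTLD_NOW | RTLD_LOCAL);
        void *v2 = dlopen("liby.so.2", RTLD_NOW | RTLD_LOCAL);

        if (!v1 || !v2) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Libraries [x] and [z] linked the normal way get no such
         * isolation: their calls into [y] can land in the wrong
         * version's code, with mismatched internal structures. */
        dlclose(v2);
        dlclose(v1);
        return 0;
    }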

Caveat: Windows DLLs can do this better. Windows DLLs can use their own
private versions of other DLLs without them conflicting in the process's
link map. As long as internal version-dependent structures and allocated
memory are not shared, this works OK, but if you allocate a resource from
lib.1 and try to free it from another DLL which uses lib.2, bad things
can happen.

This is the origin of dependency hell.

One solution is to keep ONLY common system libraries as shared libraries,
providing OS-level compatibility, and statically link everything else.

In reality, that's not very practical.

So, if we use containers, we get the best of all worlds. Applications can
be distributed as a single coherent package that doesn't suffer from
dependency hell. They can have their own configuration files. They can
even be isolated to increase security.

"Containers" are a construct that started with chroot jails. You would
setup a root environment, and then run chroot into it. You would have a
whole environment, different from your host environment and it did pollute
your running system.
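
The whole trick is a couple of system calls; a minimal sketch, assuming
you have already populated a root tree at /srv/jail (a path I made up):

    /* jail.c -- must be run as root. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* /srv/jail is a directory prepared earlier with its own
         * /bin, /lib, /etc, and so on. */
        if (chroot("/srv/jail") != 0) {
            perror("chroot");
            return 1;
        }
        if (chdir("/") != 0) {   /* don't leave a cwd outside the jail */
            perror("chdir");
            return 1;
        }

        /* From here on, "/" means /srv/jail for this process and its
         * children: a whole environment separate from the host's. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }

Everything the jailed shell can see has to be copied or bind-mounted
under /srv/jail; that preparation step is most of the work.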

Linux containers grew out of the LXC project. They have evolved a lot of
very neat features, like namespaces and separate network stacks, and end
up being (quite surprisingly) lightweight.
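
Namespaces are just system calls too. Here is a minimal sketch of one of
them, the UTS (hostname) namespace; it needs root, and "container-demo"
is a name I made up:

    /* uts_demo.c -- run as root (creating a namespace needs
     * CAP_SYS_ADMIN). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>
    #include <unistd.h>

    int main(void)
    {
        /* Move this process into a fresh UTS namespace. */
        if (unshare(CLONE_NEWUTS) != 0) {
            perror("unshare");
            return 1;
        }

        /* This hostname change is visible only inside the new
         * namespace; the host's hostname is untouched. */
        const char *name = "container-demo";
        if (sethostname(name, strlen(name)) != 0) {
            perror("sethostname");
            return 1;
        }

        struct utsname u;
        uname(&u);
        printf("hostname inside the namespace: %s\n", u.nodename);
        return 0;
    }

Container runtimes combine several of these (mount, PID, network, and
user namespaces) plus cgroups for resource limits, which is why the
result is so much lighter than a virtual machine.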

So, I kind of disagree that this is a failure. The shared library
approach was actually very successful in the earlier days, when hard
disks were measured in megabytes and memory was measured in bytes or
kilobytes. Now that a terabyte of disk space is practically a minimum
and RAM is measured in gigabytes, containers work perfectly.

So, when you look at a modern Linux system and see ".AppImage" or
".snap" files, these are basically containers, and they work great.

There was no failure; it was nothing more than constantly evolving
technology adjusting to new realities.



