Okay, so I'm well read in VB and quite an avid VB programmer, but I realize that programming in Visual Basic isn't very marketable, and above all I only use it for fun. I wanted to move to C++ but noticed that it was a LOT harder and worked completely differently. I don't really know how to handle it, but I read up on assembler and decided that if I'm going to move to a lower-level language, I might as well start with assembler to learn more about how a processor works, so that when I move up to C++ it's easier, and I'll probably be able to write way better, way more efficient code. I'm into the basics and have read about the registers, but there's a thing I don't really understand:
the way that you use functions. Is the only way to really carry out a command to set the AH register to a hex value and then use INT 021h? INT 21 calls a routine from DOS, doesn't it? What does that mean exactly? Where's a list of said commands? How would I write for anything other than Windows if every function is just a DOS routine? Could I write something independent of DOS? Maybe this is obvious and I just haven't read into it enough, though, to be fair, I only started learning assembler like an hour ago. Some basic tips from people who use the language would be really helpful.

oh also, I found this example:

.model small
.stack 100h

.data
message   db "Hello world", "$"    ; '$' terminates the string for INT 21h/AH=09h

.code
main   proc
   mov   ax,seg message    ; load the segment holding message
   mov   ds,ax             ; DS can't be loaded with an immediate

   mov   ah,09h            ; DOS function 09h: print '$'-terminated string
   lea   dx,message        ; DS:DX -> string
   int   21h

;  mov   ax,4c00h          ; equivalent single instruction
   mov   ah,4ch            ; DOS function 4Ch: terminate program
   mov   al,00h            ; return code 0
   int   21h
main   endp
end main

what's the "$" for when declaring the message identifier in the .data segment?

As with all programming, the ability to reproduce any piece of software
on a given architecture ultimately rests on low-level code.
"I tell my CPU what to do", and it does it:
that is machine language, the actual
executable code a compiled language produces,
with properties that are partly common and partly
machine-dependent according to the architecture.
Without some knowledge of low-level programming, the actual
implementation of software, its communication with the API,
the hardware, and the features of the CPU,
remains hidden, together with the basic architecture itself.
Certainly assembly language is fundamental to a thorough
introduction to computer science.
We are hackers, we are coders, trying to understand
and reproduce the effects we see on a system.
To "hack" is to do the work, a lengthy project; with compilers
slapping out code for you, one never really begins.
A compiler cannot replace learning to code, nor a true coder.
Why learn assembly? There is more than one lengthy reason,
but ultimately it teaches you the internal workings of the
machine, and that is programming.

I'll answer any questions I have time for,
and to save us both time, let me say up front:
you clearly know programming, and that's good.

So I can also address the other interests
you displayed in your post, for these have their
use in the community in general, and are there
for all those whose curiosity or desire is set towards
learning. Certainly this is where coding differs from
other sciences: the only prerequisites seem to be a want to
learn and a willingness to study.

The INT instruction transfers control to another executable
segment. In real mode, where DOS runs, the CPU looks the
handler up in the Interrupt Vector Table (IVT) at linear
address 0: the INT instruction's operand is multiplied by 4,
and the 4-byte entry at that address holds the handler's
offset and segment. In protected mode the equivalent
structure is the IDT; there each descriptor is 8 bytes, so
the operand is multiplied by 8 and added to the base address
held in the IDTR (IDT register) by the CPU.
The table and the interrupt handlers are initialized by
system software, which determines the current environment
and provides access to routines: the API.
In a 16-bit environment, applications have direct access to
the hardware, but when it comes to the technical details,
obviously the available API and ISRs are the better
alternative for many routines.
Since the code behind these vectors is written
by OS designers, it certainly differs from that
of another OS (but with DOS, there are many
compatible clones).
In x86 assembly the human-readable mnemonics
are similar in either syntax (AT&T or Intel),
but the routines behind the vectors belong to the OS,
and thus porting to another OS will require
several modifications.
To use this in 32-bit assembly, remember that
offsets must be 32 bits, and that segment registers
are loaded with selectors, which act as an index
into system data structures rather than holding
the segment's actual base address.
The 32-bit registers have the 'e' prefix
and can be used to index into a segment:
EBP is 32 bits because it is an offset
within the segment selected by SS,
and so on for the other register pairs,
CS:EIP, SS:ESP, DS:EAX...
Also, according to the run-time environment,
the offset of your executable code
in the segment selected by CS may be
different, depending on any data placed at
the beginning of the segment by the OS,
or the OS may require a specific
executable format, produced through the
assembler and linker.
Remember that 32-bit assembly has added
flexibility and thus a different instruction
format; many limitations which existed
in 16-bit code no longer apply.
For a transition to 32-bit code,
you must use the supported instructions
and data types, and obey the remaining limitations.
And for the transition to the target OS on
the same architecture, the interrupts defined
to provide its API must be used,
with their defined operands.

With AH=09h of INT 021h,
the offset of the first byte of the string
is placed in DX. The interrupt routine
knows when to stop incrementing the
offset as it reads in each byte:
it checks whether the byte at
the current offset is the
dollar sign '$', value 024h in ASCII,
which is used as the terminating
character of the string.

As a fellow student and hacker, I hope
that my writing is of aid,
like all those who have striven to teach.
Certainly, without the hard work of all
those out there, students and coders,
many, including myself,
would be hedged in by the darkness
of inadequate descriptions
and lengthy strings of documentation.

Good day, and hello world!
