On 21/07/2004, at 11:38 PM, rob@xxxxxxxxx wrote:
A thread can be spawned by a job. The job can end and the thread
continue to run. The next job can start up and have a conflict. Hence
the reason that Application Development had this part. (There is a
green screen way of looking at it that a bunch of propeller heads at
IBM favor that even a dyed-in-the-wool SEU person would gag at.) Some
'early' versions of Domino spawned some of these and, short of using
one of the tools to clear them, you had to IPL. The later versions of
Domino seem to have been pretty much beaten into submission.
Threads are not spawned. Spawn starts a job (or a process on other
systems). It starts a Batch Immediate job which is a batch job that
bypasses the job queue. Batch jobs are a pre-requisite for threaded
applications and many/most threaded applications use a spawned job to
start the threaded application but threads themselves are not spawned.
Threads are started using pthread_create() in a job that has the
ALWMLTTHD attribute set to *YES. Only batch jobs can set this
attribute.
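For what it's worth, here is a minimal C sketch of starting a thread
(the worker function is purely illustrative, and it assumes the
program is already running in a batch job that allows multiple
threads, e.g. one started with ALWMLTTHD(*YES)):

#include <pthread.h>
#include <stdio.h>

/* Illustrative worker; any function with this signature will do. */
static void *worker(void *arg)
{
    printf("Running in a secondary thread\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* pthread_create() fails if the job does not allow multiple
       threads, e.g. if it was not started with ALWMLTTHD(*YES). */
    if (pthread_create(&tid, NULL, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    return 0;
}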
Therefore a command that is not threadsafe is not recommended to be
started from within one of these threads. And I've guessed enough on
this, so coming up with a hypothesis as to why any particular command
may not be threadsafe would really be stretching it.
Commands may not be thread-safe when they presume that a particular
storage location is accessible only by the thread currently active in
the job. In traditional applications there is only ever one thread of
execution (not to be confused with threads themselves) so code can make
such presumptions. In multi-threaded code that is not the case. For
example, a function may return a pointer to static storage allocated
inside itself. If two threads were to call that function at the same
time it is quite likely that neither of them would get the expected
results.
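As a made-up sketch in C (the function name is mine, not from any
system API), this is the kind of code that breaks:

#include <stdio.h>

/* NOT thread-safe: every caller gets a pointer to the same static
   buffer, so two threads calling this at once will overwrite each
   other's result.                                                  */
char *format_id(int id)
{
    static char buffer[32];
    sprintf(buffer, "ID-%08d", id);
    return buffer;
}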
Another example could be changing the library list. This is a job-level
attribute which will affect other active threads in that job. The
code that adds a new library to the list may also fail to serialise
access to the name resolution list in the job structure, opening the
possibility of two threads changing the library list at the same time
and producing a partial or incomplete change. Neither thread is
likely to be happy with the result. Even if access is serialised, the
second thread's changes will affect the first thread because the
library list affects the entire job.
Another common issue is a deadlock: Thread A is waiting for a
resource held by Thread B while Thread B is waiting for a resource
held by Thread A. It is not dissimilar from record lock issues, but
it requires care and attention to detail to ensure the condition
cannot occur or, if it does, that a graceful recovery is implemented.
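A bare-bones pthread illustration of how that happens when two
mutexes are taken in opposite orders (the names are illustrative
only):

#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *thread_one(void *arg)
{
    pthread_mutex_lock(&lock_a);    /* holds A ...          */
    pthread_mutex_lock(&lock_b);    /* ... then waits for B */
    /* ... work ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *thread_two(void *arg)
{
    pthread_mutex_lock(&lock_b);    /* holds B ...          */
    pthread_mutex_lock(&lock_a);    /* ... then waits for A */
    /* ... work ... */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

If both threads grab their first lock before either gets its second,
neither can ever proceed. Always taking the locks in the same order
removes the possibility.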
Thread-safety is writing code that ensures one thread does not
adversely affect another thread in the same process.
On 22/07/2004, at 12:39 AM, Joe Pluta wrote:
And this saving and restoring of state is essentially the difference
between a task (or job) and a thread. Saving and restoring state can
be a very costly operation, and threads are designed to be
lightweight, thus they don't do the state save/restore. And so unless
you have a way of preventing one thread from interrupting another at
a crucial point in the code, you can have this sort of problem.
Thread state is saved and restored at the processor level. It couldn't
possibly work any other way. The reason threads are lightweight is not
due to state but rather due to avoidance of job or process start-up
overhead.
Threads run in the same job therefore a new thread does not have to
create a process which would be the case if spawn() or fork() or SBMJOB
were used.
In the olden days, we would actually disable interrupts during
sections of code that were vulnerable. That's okay when you own the
CPU, but these days it's a bit more complicated, and you need to
"synchronize" the method, which causes it to lock something (this
something is called a semaphore or mutex) while the vulnerable code
is executed. That way, two threads can't walk over each other.
Threads do not have to synchronise for the processor's benefit. That
is handled automatically by the OS--in our case by SLIC. From SLIC's
perspective there is little difference between a multi-threaded
application and a single-threaded application. On a single processor
system only one thread is running at a time. On a multi-processor
system each processor can be running a different thread. This may be
a thread from a different job or another thread from the same job.
Assume a single processor: Thread one is actively processing, when it
reaches time-slice end it is pre-empted by another thread. The state of
the current thread is saved, the state of the next thread is restored
and the next thread continues running. This happens regardless of
whether the thread is in the same job or another job.
However, this management of state does not help with storage access and
that is what you're really on about. All threads within a given process
share the same address space and there is no protection between
threads. One thread can easily trash another thread's stack or heap
storage. It is access to common storage that mutexes or semaphores are
used to manage, not processor state or register state.
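A small C sketch of the kind of shared-storage problem a mutex is
meant to prevent (the counter is purely illustrative):

#include <pthread.h>

long counter = 0;    /* shared by every thread in the job */

void *bump(void *arg)
{
    int i;
    for (i = 0; i < 100000; i++)
        counter++;   /* read-modify-write: two threads can
                        interleave here and lose updates   */
    return NULL;
}

Two threads running bump() will usually leave counter well short of
200000 because the increments overlap.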
Non-thread-safe code is code that cannot handle reentrancy. These
days, it's not so much about registers as in my Assembly Language 101
example above; I'm pretty sure that level of state swapping is
handled by the CPU (although ask me about LPAR and L2 cache some
day). Instead, it's the use of temporary work variables that causes
problems (in Java, for example, instance variables in servlets are
not threadsafe). It takes a lot of work to make code truly reentrant,
and it is especially difficult to back engineer reentrancy into code
originally written without it, which is probably a big reason why RPG
is not threadsafe without activation groups.
Thread-safety has little to do with re-entrancy although re-entrancy is
a requirement for thread-safety. Re-entrancy simply means a program has
separate data and code segments thus allowing the code to be shared by
multiple processes without requiring the programmer to handle data
separation. All OS/400 programs are re-entrant--including RPG--yet
not all programs are thread-safe.
Re-entrancy is why OS/400 programmers no longer code MRT programs but
code as if only a single user will be using the program. The separation
of data and code is handled by the OS. No extra work by the programmer
at all.
Because RPG uses static storage it is not recursive. Use of static
storage also contributes to why RPG is not thread-safe. The best you
can do is to serialise access to an RPG program from a threaded
application so only one thread in a given job (or process) is running
the code in an RPG program. A thread in a different job can be running
the same RPG program but that is due to re-entrancy, not thread-safety.
Activation groups can be used to solve the recursion problem because
storage is scoped to the activation group therefore a new activation
group gets a new chunk of static storage.
Activation groups alone will not solve the thread-safety problem.
That's because activation groups will not isolate threads within the
same program. That has to be handled by the programmer via mutexes
and semaphores (or thread-specific data, but that's a clumsy idea). Because
RPG uses static storage itself it cannot be made thread-safe without
changing the class of the storage (e.g., automatic instead of static)
or by serialising access to that storage (and even then serialising
access is problematic because RPG wasn't designed to be used that way).
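A sketch of that serialisation in C, assuming a hypothetical wrapper
around whatever mechanism is used to call the non-thread-safe program
(all names here are made up):

#include <pthread.h>

pthread_mutex_t rpg_lock = PTHREAD_MUTEX_INITIALIZER;

/* call_rpg_program() stands in for the actual call interface. */
extern void call_rpg_program(const char *parm);

void safe_call(const char *parm)
{
    pthread_mutex_lock(&rpg_lock);      /* one thread at a time */
    call_rpg_program(parm);
    pthread_mutex_unlock(&rpg_lock);
}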
Thread-safety is implemented by convention: All threads in a job have
to play by the rules and follow the protocol of:
1) Lock a mutex
2) Change common storage
3) Release mutex
If the mutex cannot be locked that indicates some other thread holds
it, and the thread attempting to gain the lock should wait for it to
be released.
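In pthread terms the protocol looks something like this
(pthread_mutex_lock() itself does the waiting; the shared total is
illustrative only):

#include <pthread.h>

pthread_mutex_t total_lock = PTHREAD_MUTEX_INITIALIZER;
long shared_total = 0;                   /* common storage */

void add_to_total(long amount)
{
    pthread_mutex_lock(&total_lock);     /* 1) lock the mutex        */
    shared_total += amount;              /* 2) change common storage */
    pthread_mutex_unlock(&total_lock);   /* 3) release the mutex     */
}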
This is a bit like co-operative multi-tasking where applications follow
a protocol to release the CPU to give other applications a chance. It
is up to the application programmer to decide when to release the CPU.
Similarly, it is up to the application programmer to decide when and
how to synchronise and how long to maintain a lock. Anyone who has
suffered through Windows (3.x, 95, 98, etc. but not those dialects
based on OS/2 such as NT, W2k, and XP) knows that co-operative
multitasking doesn't work very well. Pre-emptive multitasking is
better and that is what real operating systems use. Threaded
applications suffer similar problems because they are more dependent
on the capabilities of the programmer than non-threaded applications.
Note that serialising access to storage is necessary only for static
storage (never a good idea with threads because its use effectively
makes using threads impossible), global storage (usually a poor idea
with threads), or external objects like spaces. Automatic storage is
local to the procedure and allocated on each call to that procedure.
Because each thread has its own call stack, each thread has its own
copy of automatic variables.
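Returning to the earlier static-buffer sketch, the usual fix is to
keep the result in storage owned by the caller, which will normally
be an automatic variable on that thread's own stack (names again are
illustrative):

#include <stdio.h>

/* Thread-safe version: the caller supplies the buffer. */
void format_id_r(int id, char *buffer, size_t size)
{
    snprintf(buffer, size, "ID-%08d", id);
}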
Want to know more? Read the section on Multithreaded applications in
the Information Centre or buy a decent book on the subject such as:
Programming with Posix Threads
by David R. Butenhof
published by Addison-Wesley
(ISBN 0-201-63392-2)
Regards,
Simon Coulter.
--------------------------------------------------------------------
FlyByNight Software AS/400 Technical Specialists
http://www.flybynight.com.au/
Phone: +61 3 9419 0175 Mobile: +61 0411 091 400 /"\
Fax: +61 3 9419 0175 \ /
X
ASCII Ribbon campaign against HTML E-Mail / \
--------------------------------------------------------------------