The Computer Nonsense Guide
Or the influence of chaos on reason! 31.08.2018

Abstract
"My aim is: to teach you to pass from disguised nonsense to something that is patent nonsense." — Ludwig Wittgenstein
This guide is the product of the efforts of many people too numerous to list here and of the unique environment of the Space Beam.
The software environment and operating-system-like parts contain many things which are still in a state of flux. This work confines
itself primarily to the stabler parts of the system, and does not address the window system, user interface or application programming
interfaces at all.
We are an open-source research & development community that conducts multidisciplinary work on distributed systems, artificial
intelligence and high-performance computing.
Our Mission: to provide tools inside a simple workspace for play, work and science through observation and action.
Our Goal: a distributed AI toolkit and workspace environment for machines of all ages!
We make a custom Debian workspace that anyone can use today; its parts work together with native support for the Python 3,
LuaLang and BEAM ecosystems.

Core ideas
Functions are a form of objects.
Message passing and function calling are analogous.
Asynchronous message passing is necessary for non-blocking systems.
Selective receive allows a process to ignore messages that are not interesting right now.
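
A minimal Erlang sketch of these ideas, with hypothetical module and message names: a counter process whose "object" is a function looping over its state, reached only by asynchronous message passing, and a caller that uses selective receive to wait for exactly the reply it cares about.

    -module(core_ideas).
    -export([start/0, call/2]).

    %% Message passing instead of function calling: the counter's state
    %% lives in the argument of the function it loops over.
    start() -> spawn(fun() -> loop(0) end).

    loop(N) ->
        receive
            {From, Ref, increment} ->
                From ! {Ref, N + 1},   % asynchronous send, never blocks
                loop(N + 1)
        end.

    %% Selective receive: wait only for the reply carrying our own Ref,
    %% leaving any other message queued in the mailbox for later.
    call(Pid, Request) ->
        Ref = make_ref(),
        Pid ! {self(), Ref, Request},
        receive
            {Ref, Reply} -> Reply
        end.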

Prerequisites
It is assumed that the reader has done some programming and is familiar with data types and programming language syntax.


Introduction
"An object is really a function that has no name and that gets its argument a message and then look at that message and decide what
to do next." — Richard P. Gabriel
About 100 years ago there were two schools of thought, a clash between two paradigms, for how to make an intelligent system. One
paradigm was mathematical logic: if I give you some true premises and some valid rules of inference, you can derive some true
conclusions. People who believed in logic thought that's the way the mind must work, and that somehow the mind is using some funny
kind of logic that can cope with the paradox of the liar, or with the fact that sometimes you discover that things you believed were false.
Classical logic has problems with that. This paradigm said we have these symbolic expressions in our head and we have rules for
repairing them; the essence of intelligence is reasoning, and it works by moving around symbols in symbolic expressions.
There was a completely different paradigm that wasn't called artificial intelligence; it was called neural networks. It said: the one
intelligent system we know about is the mammalian brain, and the way that works is you have lots of little processes with lots of
connections between them, and you change the strengths of the connections; that's how you learn things. So they thought the essence
of intelligence was learning, and in particular how you change the connection strengths so that your neural network will do new things.
They would argue that everything you know comes from changing those connection strengths, and those changes have to somehow be
driven by data; you're not programmed, you somehow absorb information from data. Well, for 100 years this battle has gone on, and
fortunately today we can tell you it was recently won.


Lua in Erlang
"Scripting is a relevant technique for any programmer's toolbox." — Roberto Ierusalimschy
Luerl is an implementation of standard Lua 5.2 written in Erlang/OTP.
Lua is a powerful, efficient, lightweight, embeddable scripting language common in games, IoT devices, machine learning and scientific
computing research.
It supports procedural, object-oriented, functional, data-driven, reactive, organizational programming and data description.
Being an extension language, Lua has no notion of a "main" program: it works as a library embedded in a host. The host program can
invoke functions to execute a piece of Lua code, can write and read Lua variables, and can register Erlang functions to be called by Lua code.
Luerl is a library, written in clean Erlang/OTP. For more information, check out the get started tutorial. You may want to browse the
examples source code.
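
A minimal sketch of this first kind of use, assuming the luerl:init/0, luerl:do/2 and luerl:eval/2 entry points from the Luerl README: Erlang holds the control, executes a piece of Lua code, and reads a Lua variable back (numbers come back as floats, per Lua 5.2 semantics).

    %% In the Erlang shell: Erlang as host, Lua as the embedded library.
    State0 = luerl:init(),                          % fresh Lua state
    {_, State1} = luerl:do("x = 6 * 7", State0),    % write a Lua variable
    {ok, [42.0]} = luerl:eval("return x", State1).  % read it back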

Luerl goal
A proper implementation of the Lua language
It SHOULD look and behave the same as Lua 5.2
It SHOULD include the Lua standard libraries
It MUST interface well with Erlang

Embedded language
Lua is an embeddable language implemented as a library that offers a clear API for applications inside a register-based virtual machine.
This ability to be used as a library to extend an application is what makes Lua an extension language.
At the same time, a program that uses Lua can register new functions in the Luerl environment; such functions are implemented in
Erlang (or another language) and can add facilities that cannot be written directly in Lua. This is what makes any Lua implementation an
extensible language.
These two views of Lua (as an extension language and as an extensible language) correspond to two kinds of interaction between Erlang and
Lua. In the first kind, Erlang has the control and Lua is the library. The Erlang code in this kind of interaction is what we call application
code.
In the second kind, Lua has the control and Erlang is the library. Here, the Erlang code is called library code. Both application code and
library code use the same API to communicate with Lua, the so called Luerl API.
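
A sketch of the second kind of interaction, assuming Luerl's documented convention that an Erlang function exposed to Lua takes the Lua argument list plus the state and returns {Results, State}, and that luerl:set_table/3 installs it under a Lua name (both taken here as assumptions from the Luerl README):

    %% Erlang as library code: register an Erlang fun that Lua can call.
    State0 = luerl:init(),
    Double = fun([N], St) -> {[2 * N], St} end,      % Lua args in, results out
    State1 = luerl:set_table([<<"double">>], Double, State0),
    {ok, [84.0]} = luerl:eval("return double(42)", State1).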
Modules, Object Oriented programming and iterators need no extra features in the Lua API. They are all done with standard
mechanisms for tables and first-class functions with lexical scope.
Exception handling and code loading go the opposite way: primitives in the API are exported to Lua from the base system (C, JIT, BEAM).
Lua implementations are based on the idea of closures; a closure represents the code of a function plus the environment where the
function was defined.
Like with tables, Luerl itself uses functions for several important constructs in the language. The use of constructors based on
functions helps to make the API simple and general.

The result
Luerl is a native Erlang implementation of standard Lua 5.2 written for the BEAM ecosystem.
Easy for Erlang to call
Easy for Lua to call Erlang
Erlang concurrency model and error handling
Through the use of the BEAM languages, Luerl can be augmented to cope with a wide range of different domains, creating a
customized language sharing a syntactical framework.


Lisp Flavoured Erlang
"Lisp: Good News Bad News How to Win Big ™." — Richard P. Gabriel
LFE is a proper Lisp based on the features and limitations of Erlang's virtual machine; attuned to vanilla Erlang and OTP, it coexists
seamlessly with the rest of the BEAM ecosystem.
Some history: Robert Virding tried Lisp 1 but it didn't really work, Lisp 2 fits the BEAM better so LFE is Lisp 2+, or rather Lisp 3?
For all of us, in general, the bad news is that almost everything we have been using is WRONG! No one can deny the respect
that Black Mesa Research deserves, but even with its limited concurrency implemented by a CSP model, with monads or static types,
it still has its classical boundaries: the λ-calculus' low concurrency ceiling. [T. Church]
We were not that into Lisp until reading some tweets from a certain no-horn viking. The short story is that Scheme is Lisp 2: the goal of
Scheme was to implement a Lisp following the actor model, but they discovered closures instead, got hyped with them and forgot about
Hewitt.
Erlang was born to the world in Stockholm, Sweden in 1998, when Jane Walerud, Bjarne Däcker, Mike Williams, Joe Armstrong and Robert
Virding open-sourced a language that implements this model of universal computation based in physics without even knowing or caring much
about it: just pure engineering powers and a great problem to solve.
It's a language out of a language out of Sweden that can be used to build web scale, asynchronous, non-blocking, event driven,
message passing, NoSQL, reliable, highly available, high performance, real time, clusterable, bad ass, rock star, get the girls, get the
boys, impress your mom, impress your cat, be the hero of your dog, AI applications.
It's Lisp, you can blast it in the face with a shotgun and it keeps on coming.

LFE goal
An efficient implementation of a "proper" Lisp on the BEAM with seamless integration for the Erlang/OTP ecosystem.

The result
A New Skin for the Old Ceremony, where the thickness of the skin affects how efficiently the new language can be implemented and
how seamlessly it can interact.


Why Lisp 3?
"Lisp is the greatest single programming language ever designed." — Alan Kay
A lot has changed since 1958, even for Lisp: it now has even more to offer.
It's a programmable programming language
As such, it's an excellent language for exploratory programming.
Due to its venerable age, there is an enormous corpus of code and ideas to draw from.
Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upmanship, and the glee born of technical cleverness
characteristic of the hacker culture than by sober assessment of technical requirements.

Lisp 1
Early thoughts about a language that eventually became Lisp started in 1956 when John McCarthy attended the Dartmouth Summer
Research Project on Artificial Intelligence.
The original idea was to produce a compiler, but in the 50's this was considered a major undertaking, and McCarthy and his team
needed some experimenting in order to get good conventions for subroutine linking, stack handling and erasure.
They started by hand-compiling various functions into assembly language and writing subroutines to provide a LISP environment.
They decided on garbage collection, in which storage is abandoned until the free storage list is exhausted; then the storage accessible from
program variables and the stack is marked, and the unmarked storage is made into a new free storage list.
At the time it was also decided to use SAVE and UNSAVE routines that use a single contiguous public stack array to save the values of
variables and subroutine return addresses in the implementation of recursive subroutines.
Another decision was to give up the prefix and tag parts of the word; this left a single type, a 15-bit address, so that the
language didn't require declarations.
These simplifications made Lisp into a way of describing computable functions much neater than the Turing machines or the general
recursive definitions used in recursive function theory.
The fact that Turing machines constitute an awkward programming language doesn't much bother recursive function theorists, because
they almost never have any reason to write particular recursive definitions since the theory concerns recursive functions in general.
Another way to show that Lisp was neater than Turing machines was to write a universal LISP function and show that it is briefer and
more comprehensible than the description of a universal Turing Machine.
This refers to the Lisp function eval(e, a), which computes the value of a Lisp expression e; the second argument a is a list of
assignments of values to variables, and is needed to make the recursion work.
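
To make the shape of eval(e, a) concrete, here is a hypothetical miniature in Erlang rather than Lisp: expressions are numbers, variables (atoms looked up in the assignment list a), or operator tuples, and the assignment list is what carries the recursion.

    -module(micro).
    -export([eval/2]).

    %% eval(E, A): E is a number, a variable, or {Op, E1, E2};
    %% A is the list of assignments of values to variables.
    eval(E, _A) when is_number(E) -> E;
    eval(E, A) when is_atom(E) -> proplists:get_value(E, A);
    eval({plus, E1, E2}, A) -> eval(E1, A) + eval(E2, A);
    eval({times, E1, E2}, A) -> eval(E1, A) * eval(E2, A).

    %% micro:eval({plus, x, {times, 2, y}}, [{x, 1}, {y, 3}]) evaluates to 7.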

Lisp 2
The Lisp 2 project was a concerted effort at a new language that represented a radical departure from Lisp 1.5.
In contrast to most languages, in which the language is first designed and then implemented, Lisp 2 was an implementation in search of
a language; in retrospect we can point out that it was searching for one out of Sweden.
The earliest known LISP 2 document is a one-page agenda for a LISP 2 Specifications Conference held by the Artificial Intelligence
Group at Stanford. Section 2 of this agenda was:

Proposals for Lisp 2.0
Linear Free Storage
Numbers and other full words
Auxiliary Storage
Input language, infix notation.
Arrays
Freer output format
Sequence of implementation
Comments
Documentation and maintenance
Hash Coding
Subroutine linkage
Storage conventions
Effect of various I/O apparatus
Interaction with programs in other languages
Expressions having property lists

The Actor Model
Actors are the universal primitive of concurrent digital computation. In response to a message that it receives, an actor can make local
decisions, create more actors, send more messages, and designate how to respond to the next message received.
Unbounded nondeterminism is the property that the amount of delay in servicing a request can become unbounded as a result of
arbitration of contention for shared resources, while still guaranteeing that the request will eventually be serviced.
Arguments for unbounded nondeterminism include the following:
There is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle.
Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with input from
outside, e.g., keyboard input, disk access, network input, etc.
So it could take an unbounded time for a message sent to a computer to be received, and in the meantime the computer could
traverse an unbounded number of states.
The following were the main influences on the development of the actor model of computation:
The suggestion by Alan Kay that procedural embedding be extended to cover data structures, in the context of our previous attempts to generalize the work by Church, Landin, Evans, and Reynolds on "functional data structures."
The context of our previous attempts to clean up and generalize the work on coroutine control structures of Landin, Mitchell, Krutar, Balzer, Reynolds, Bobrow-Wegbreit, and Sussman.
The influence of Seymour Papert's "little man" metaphor for computation in LOGO.
The limitations and complexities of capability-based protection schemes. Every actor transmission is in effect an inter-domain call, efficiently providing an intrinsic protection on actor machines.
The experience developing previous generations of PLANNER. Essentially the whole PLANNER-71 language (together with some extensions) was implemented by Julian Davies in POP-2 at the University of Edinburgh.
In terms of the actor model of computation, control structure is simply a pattern of passing messages.
We have quoted Hewitt at length because the passage illustrates the many connections among different ideas floating around in the AI,
Lisp, and other programming language communities; and because this particular point in the evolution of ideas represented a
distillation that soon fed back quickly and powerfully into the evolution of Lisp itself.
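
On the BEAM those capabilities are directly visible in code; here is a minimal hypothetical actor that makes a local decision, sends more messages, creates more actors, and designates how to respond to the next message through the state it loops with.

    -module(cell).
    -export([new/1, cell/1]).

    new(Value) -> spawn(cell, cell, [Value]).

    cell(Value) ->
        receive
            {set, NewValue} ->                  % designate future behavior
                cell(NewValue);
            {get, From} ->                      % send more messages
                From ! {value, Value},
                cell(Value);
            {clone, From} ->                    % create more actors
                From ! {cell, new(Value)},
                cell(Value)
        end.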

Logic and λ-calculus
Logic programming is the proposal to implement systems using mathematical logic.
Perhaps the first published proposal to use mathematical logic for programming was John McCarthy's Advice Taker paper.
Planner was the first language to feature "procedural plans" that were called by "pattern-directed invocation" using "goals" and
"assertions". A subset called Micro Planner was implemented by Gerry Sussman, Eugene Charniak and Terry Winograd and was used in
Winograd's natural language understanding program SHRDLU, and some other projects.
This generated a great deal of excitement in the field of AI. It also generated controversy, because it proposed an alternative to the logic
approach, one of the mainstay paradigms for AI.
The upshot is that the procedural approach has a different mathematical semantics, based on the denotational semantics of the Actor
model, from the semantics of mathematical logic.
There were some surprising results from this research including that mathematical logic is incapable of implementing general
concurrent computation even though it can implement sequential computation and some kinds of parallel computation including the
lambda calculus.


Classical logic blows up in the face of inconsistent information, which is all but ubiquitous with the growth of the internet.
This change enables a new generation of systems that incorporate ideas from mathematical logic in their implementation, resulting in
a reincarnation of logic programming. But something is often transformed when reincarnated!

A limitation of logic programming
In his 1988 paper on the early history of Prolog, Bob Kowalski published the thesis that "computation is controlled deduction", which he
attributed to Pat Hayes.
Contrary to Kowalski and Hayes, Hewitt's thesis was that logical deduction was incapable of carrying out concurrent computation in
open systems because of indeterminacy in the arrival order of messages.

Indeterminacy in concurrent computation
Hewitt and Agha [1991] argued that: The Actor model makes use of arbitration for determining which message is next in the arrival
ordering of an Actor that is sent multiple messages concurrently.
For example, arbiters can be used in the implementation of the arrival ordering of an Actor which is subject to physical indeterminacy in
the arrival order.
In concrete terms, for Actor systems we typically cannot observe the details by which the arrival order of messages for an Actor is
determined. Attempting to do so affects the results and can even push the indeterminacy elsewhere.
Instead of observing the internals of arbitration processes of Actor computations, we await outcomes.
Physical indeterminacy in arbiters produces indeterminacy in Actors. The reason that we await outcomes is that we have no alternative
because of indeterminacy.
According to Chris Fuchs [2004], quantum physics is a theory whose terms refer predominately to our interface with the world. It is a
theory not about observables, not about beables, but about 'dingables': we tap a bell with our gentle touch and listen for its beautiful ring.
It is important to distinguish between indeterminacy, in which factors outside the control of an information system are making the
decision, and choice, in which the information system has some control.
It is not sufficient to say that indeterminacy in Actor systems is due to unknown/unmodeled properties of the network infrastructure.
The whole point of the appeal to quantum indeterminacy is to show that aspects of Actor systems can be unknowable and the
participants can be entangled.
The concept that quantum mechanics forces us to give up is the description of a system independent from the observer providing
such a description; that is, the concept of the absolute state of a system. I.e., there is no observer-independent data at all.
According to Zurek [1982], "Properties of quantum systems have no absolute meaning. Rather, they must always be characterized with
respect to other physical systems."
Does this mean that there is no relation whatsoever between views of different observers? Certainly not. According to Rovelli [1996] "It
is possible to compare different views, but the process of comparison is always a physical interaction (and all physical interactions are
quantum mechanical in nature)".

Lisp 3
Lisp Flavored Erlang (LFE) is a functional, concurrent, general-purpose programming language and Lisp dialect built on top of Core
Erlang and the Erlang Virtual Machine (BEAM).

What isn't
It isn't an implementation of Maclisp
It isn't an implementation of Scheme
It isn't an implementation of Common Lisp
It isn't an implementation of Clojure

What is
LFE is a proper Lisp based on the features and limitations of the Erlang VM (BEAM).
LFE coexists seamlessly with vanilla Erlang/OTP and the rest of the BEAM ecosystem.


LFE runs on the standard Erlang Virtual Machine (BEAM).
The object-oriented programming style used in the Smalltalk and Actor families of languages is available in LFE and used by the
Monteverde HPC package system. Its purpose is to perform generic operations on objects. Part of its implementation is simply a
convention in procedure-calling style; part is a powerful language feature, called flavors, for defining abstract objects.

Lisp Machine flavors
When writing a program, it is often convenient to model what the program does in terms of objects: conceptual entities that can be
likened to real-world things.
Choosing what objects to provide in a program is very important to the proper organization of the program.
In an object-oriented design, specifying what objects exist is the first task in designing the system.
In an electrical design system, the objects might be "resistors", "capacitors", "transistors", "wires", and "display windows".
After specifying what objects there are, the next task of the design is to figure out what operations can be performed on each object.
In this model, we think of the program as being built around a set of objects, each of which has a set of operations that can be
performed on it.
More rigorously, the program defines several types of object, and it can create many instances of each type.
The program defines a set of types of object and, for each type, a set of operations that can be performed on any object of that type.
The new types may exist only in the programmer's mind. For example, it is possible to think of a disembodied property list as an
abstract data type on which certain operations, such as get and put, are defined.
This type can be instantiated by evaluating a form that creates a new disembodied property list. The fact that disembodied property
lists are really implemented as lists, indistinguishable from any other lists, does not invalidate this point of view.
However, such conceptual data types cannot be distinguished automatically by the system; one cannot ask "is this object a
disembodied property list, as opposed to an ordinary list?".
We represent our conceptual object by one structure.
The LFE flavors we use for the representation have structure and refer to other Lisp objects.
The object keeps track of an internal state which can be examined and altered by the operations available for that type of object: get
examines the state of a property list, and put alters it.
We have seen the essence of object-oriented programming. A conceptual object is modeled by a single Lisp object, which bundles up
some state information. For every type there is a set of operations that can be performed to examine or alter the object state.
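
A sketch of the disembodied property list as such a conceptual type, in Erlang rather than Lisp and with hypothetical names: the object is a single term bundling state, and get/put are the operations that examine and alter it (alteration returning a new value, since state is immutable on the BEAM).

    -module(plist).
    -export([new/0, get/2, put/3]).

    new() -> [].                                 % a fresh, empty property list

    %% get examines the state of the object.
    get(Key, Plist) -> proplists:get_value(Key, Plist).

    %% put returns the altered object.
    put(Key, Value, Plist) -> lists:keystore(Key, 1, Plist, {Key, Value}).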


Application containers
"Awaken my child, and embrace the glory that is your birthright." — The Overmind
Singularity: Application containers for Linux enable full control of the environment on whatever host you are on. This includes
distributed systems, your favorite blockchain, HPC centers, microservices, GPUs, IoT devices, Docker containers and the whole
computing enchilada.
Containers are used to package entire scientific workflows, software libraries, and datasets.
Did you already invest in Docker? The Singularity software can import your Docker images without having Docker installed or being a
superuser.
As the user, you are in control of the extent to which your container interacts with its host. There can be seamless integration, or little
to no communication at all.
Reproducible software stacks: These must be easily verifiable via checksum or cryptographic signature in such a manner that
does not change formats. By default Singularity uses a container image file which can be checksummed, signed and easily
verified.
Mobility of compute: Singularity must be able to transfer (and store) containers in a manner that works with standard data
mobility tools and protocols.
Compatibility with complicated architectures: The runtime must be compatible with existing HPC, scientific, compute farm and
enterprise architectures, which may be running legacy vintage systems that do not support advanced namespace features.

Linux containers
A Unix operating system is broken into two primary components: the kernel space and the user space. The kernel supports the user
space by interfacing with the hardware, providing core system features and creating the software compatibility layers for the user space.
The user space, on the other hand, is the environment most people are familiar with interfacing with. It is where applications,
libraries and system services run.
Containers shift the emphasis away from the run-time environment by commoditizing the user space into swappable components. This
means that the entire user space portion of a Linux operating system, including programs, custom configuration, and environment can
be interchangeable at run-time.
Software developers can now build their stack onto whatever operating system base fits their needs best, and create distributable,
run-time encapsulated environments; users never have to worry about dependencies, requirements, or anything else from the user
space.
Singularity provides the functionality of a virtual machine, without the heavyweight implementation and performance costs of emulation
and redundancy!

Container instances
Singularity "container instances" allow you to run services (e.g. Nginx, Riak, PostgreSQL, etc...) a container instance, simply put, is a
persistent and isolated version of the container image that runs in the background.

Important notes
Instances are linked with your user. So if you start an instance with sudo, it is going to run under root, and you will need to call sudo
singularity instance.list in order to see it.


Live for the swarm!
"Send colonies to one or two places, which may be as keys to that state, for it is necessary either to do this or else to keep there a
great number of lings and hydras." — Sarah Kerrigan
We present Blueberry, a TorchCraft bot system built for online competition and AI research on the real-time strategy game StarCraft;
ours is a message-passing, asynchronous system that exploits the hot-swap code loading and parallelism of Luerl and the concurrency of
the BEAM VM.
StarCraft serves as an interesting domain for Artificial Intelligence (AI), since it represents a well-defined, complex adversarial environment
which poses a number of interesting challenges in the areas of information gathering, planning, dealing with uncertainty, domain knowledge
exploitation, task decomposition, spatial reasoning, and machine learning research.
Unlike synchronous turn-based games like chess and go, StarCraft games are played in real time: the state continues to progress even
if no action is taken, actions must be decided in fractions of a second, game frames issue simultaneous actions to hundreds of units at
any given time, and players only get information about what their units observe; there is a fog of information present in the environment,
and hidden units that require additional detection.

Core ideas
StarCraft is about information, the smoke of rare weeds and silver for tools.
Strong units vs mobile units.
Defense units are powerful but immobile, offense units are mobile but weak.
Efficiency is not the number one goal.

Stages of a game
Early, Make/defend a play & send colonies to one or two bases.
Middle, Core units, make/defend pressure & take a base.
Late, Matured core units, multi-pronged tactics & take many bases.
Final, The watcher observes, the fog collapses, an event resolves.
Information, colonies, improved economy for better tools.


What is an organization?
"ORGs is a paradigm in which people are tightly integrated with information technology that enables them to function with an
organizationally relevant task or problem." — Carl Hewitt
A monkey, a building, a drone: each is a concrete object and can be easily identified. One difficulty attending the study of organizations
is that an organization is not as readily visible or describable.
Exactly what is an organization such as a business concern? Is it a building? A collection of machinery? A legal document containing a
statement of incorporation? It is hardly likely to be any of these by itself. Rather, to describe an organization requires the consideration
of a number of properties it possesses, thus gradually making clear, or at least clearer, what it is.
The purposes of the organization, whether it is formal or informal, are accomplished by a collection of members whose efforts, or to use
a term to be employed throughout this work, behavior, are so directed that they become coordinated and integrated in order to attain
sub-goals and objectives.

Perception and behavior
All of us at some point or another have had the experience of watching another person do something or behave in a certain way, saying
to ourselves, "She/he acts as if she/he thought, ... " and then filling in some supposition about the way the other person looked at
things.
Simple as the statement "He acts as if he thought ... " may be, it illustrates two important points.
First, what the person thinks he sees may not actually exist. Workers could act as if changes in methods were an attempt by management
to exploit them.
As long as they had this attitude or belief, any action by management to change any work method would be met, at the very least, with
suspicion and probably with hostility.
The second point is that people act on the basis of what they see. In understanding behavior, we must recognize that facts people do
not perceive as meaningful usually will not influence their behavior, whereas the things they believe to be real, even though factually
incorrect or nonexistent, will influence it.
Organizations are intended to bring about integrated behavior. Similar, or at least compatible, perceptions on the part of organizational
members are therefore a matter of prime consideration.

Clues
One of the first things we must recognize is that in learning about things we not only learn what they are, that is, that the round white
object is a football, but we also learn what these things mean, that is, that football is a sport that the USA men's team doesn't get and their
women counterparts have mastered perfectly.
Upon receiving a signal (sight of football) we perform an interpretative step by which a meaning is attached to it.
Many of these "meanings" are so common and fundamental in our understanding of the world that we fail to note them except under
unusual circumstances.
One way these meanings are brought home to us is by meeting people from countries different from our own; many of the meanings
which things have for us come from our culture, and they are things all people within the culture share.
These common interpretations of things help enormously in communicating, but they sometimes make it difficult to set factors in
perspective so that we can really understand the reasons for behavior.

Threshold of perception
We all have certain things (stimuli) to which we are sensitized, and when these appear we are instantly alert and eager to examine
them.
There are other stimuli of relative unimportance to us to which we do not pay as much attention and may, in effect, actually block out.
One way of viewing this subject is to suggest that we have thresholds or barriers which regulate what information from the outside world
reaches our consciousness.


On some matters the barriers are high and we remain oblivious to them, but on others which are quite important to us we are sensitized
and, in effect, we lower the barrier, permitting all the information possible concerning these matters to reach our consciousness.

Resonance
Related to this idea of sensitivity and selectivity is a phenomenon that might be called resonance.
Through experience and what we see ourselves to be, our understanding of a particular item of information may be very similar to that
of others.
It is explained this way: since all the people inside a group look upon themselves as peers, they know what a change means to the
individual in annoyance and inconvenience.
They can easily put themselves into his shoes and, once having done so, probably feel almost as disturbed as he might be.

Internal consistency
One property of the images formed of the world around us is that they are reasonable, or internally consistent.
For instance, we may look at some drawing on a page and see a rabbit. One portion of the lines might suggest a duck, but we do not
have an image of something half rabbit and half duck.
In fact, if our first impression is of a duck, we may never notice that a portion looks like a rabbit.
We seem to tune out the elements that do not fit.

Dealing with conflict
Organizations that possess the capacity to deal adequately with conflict have been described as follows:
1. They possess the machinery to deal constructively with conflict. They have a structure which facilitates constructive interaction
between individuals and work groups.
2. The personnel of the organization are skilled in the processes of effective interaction and mutual influence (skills in group leadership
and membership roles and in group building and maintenance functions).
3. There is high confidence and trust in one another among members of the organization, loyalty to the work group and to the
organization, and high motivation to achieve the organization's objectives.
Confidence, loyalty, and cooperative motivation produce earnest, sincere, and determined efforts to find solutions to conflict. There is
greater motivation to find a constructive solution than to maintain an irreconcilable conflict. The solutions reached are often highly creative
and represent a far better solution than any initially proposed by the conflicting interests.
The essence here is that out of conflict will come a new synthesis superior to what existed before and perhaps superior to any
individual point of view existent in conflict.
Conflict, resting in part on different perspectives of what "ought" to be, is one of the avenues for opening new directions for the
organization or one of the ways of moving in new directions. This is not only useful but also vital for organizational survival. The
question, therefore, as we view conflict is not, "How to eliminate it?" but, "Is it conflict of such a type and within circumstances where it
will contribute to rather than detract from organizational interest?"
Whether a conflict is good or bad for an organization, and whether a conflict can be made useful for an organization, depends not so much
on manipulating the conflict itself as on the underlying conditions of the overall organization. In this sense, conflict can be seen as:
1. a symptom of more basic problems which require attention
2. an intervening variable in the overall organization to be considered, used, and maintained within certain useful boundaries.
Organizational adaptation frequently proceeds through a new arrangement developing informally, which, after proving its worth and
becoming accepted, is formally adopted. The first informal development, however, may be contrary to previously established
procedures and in a sense a violation or a subversion of them; or the informal procedures may be an extension of a function for internal
political purposes.

Programmed links


If the process had been given an instruction to report immediately on completion of the task, this instruction facilitates linking the completed
act with the next one through an information transfer; we call this a programmed link.
The supervisor node, of course, may detect that something is wrong through another control cycle. It can then take corrective action by
inserting or adding such a programmed link, or perhaps by attacking the more difficult problem of the apathetic attitudes and
motivation its units could display.

Progression of goals
Organizations have a progression of goals which results from a division of work.
A subdivided goal becomes the task of a process contained within a specialized organizational unit.
This nesting of goals is contained as part of the core organizational means-ends chain.
Needless to say, the hierarchy of control loops which are connected with the progression of goals may be handled in a number of ways;
regardless of how the elements are allocated, the important factor is that all elements must be provided for in some way. Hence, our
model supplies an extremely useful tool in analyzing complex control situations by telling us what basic functions must occur and in
what sequence, even though initially we have no idea as to where or how they are executed in the organization.

Goals and feedback
The feedback loop containing information about organizational performance and conditions leads to definition of subunit goals or
standards. It's important to show how a situation in one area could lead to modifications in a number of organizational units at higher
levels.
This may even result in reformulating the basic goals of organizations. Feedback is essential to adequate goal formation.


Iterative programming
"Programming is an iterative process, iterative is another name for intelligent trial and error."
— Michael C Williams
In cybernetics, the word "trial" usually implies random-or-arbitrary, without any deliberate choice.
Programming is an iterative process with a large amount of trial and error to find out:
What needs to be implemented,
Why it needs to be implemented,
How it should be implemented.
Trial and error is also a heuristic method of problem solving, repair, tuning, or obtaining knowledge.
In the field of computer science, the method is called generate and test. In elementary algebra, when solving equations, it is "guess
and check".

Cumulative adaptation
The existence of different available strategies allows us to consider a separate superior domain of processing, a "meta-level" above the
mechanics of switch handling from where the various available strategies can be randomly chosen.
Suppose N events each have a probability p of success, and the probabilities are independent. An example would occur if N wheels
bore letters A and B on the rim, with A's occupying the fraction p of the rim; the wheels are spun and allowed to come to rest, and
those that stop at an A count as successes.
Let us compare three ways of compounding these minor successes to a Grand Success, which, we assume, occurs only when every
wheel is stopped at an A.
Case 1: All N wheels are spun; if all show an A, Success is recorded and the trials ended; otherwise all are spun again, and so on till
'all A's' come up at one spin.
Case 2: The first wheel is spun; if it stops at an A it is left there; otherwise it is spun again. When it eventually stops at an A the
second wheel is spun similarly; and so on down the line of N wheels, one at a time, till all show A's.
Case 3: All N wheels are spun; those that show an A are left to continue showing it, and those that show a B are spun again. When
further A's occur they also are left alone. So the number spun gets fewer and fewer, until all are at A's.
The conclusion that Case 1 is very different from Cases 2 and 3, does not depend closely on the particular values of p and N.
Comparison of the three Cases soon shows why Cases 2 and 3 can arrive at Success so much sooner than Case 1: they can benefit
by partial successes, which 1 cannot. Suppose, for instance, that, under Case 1, a spin gave 999 A's and 1 B. This is very near
complete Success; yet it counts for nothing, and all the A's have to be thrown back into the melting-pot. In Case 3, however, only one
wheel would remain to be spun; while Case 2 would perhaps get a good run of A's at the left-hand end and could thus benefit from it.
The examples show the great, the very great, reduction in time taken that occurs when the final Success can be reached by stages, in
which partial successes can be conserved and accumulated.
We can draw, then, the following conclusion. A compound event that is impossible if the components have to occur simultaneously
may be readily achievable if they can occur in sequence or independently.
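
The arithmetic behind that conclusion, under the stated assumption of independent wheels each succeeding with probability p: a Case 1 round succeeds only with probability p^N, while in Case 2 each wheel is a separate geometric trial.

\[
E[\text{rounds}]_{\text{Case 1}} = p^{-N},
\qquad
E[\text{spins}]_{\text{Case 2}} = N\,p^{-1}.
\]

With p = 1/2 and N = 10, Case 1 needs about 2^10 = 1024 rounds of spinning all ten wheels, while Case 2 finishes after about 20 individual spins.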

Dynamic optimization
Dynamic optimization (also known as dynamic programming) is a method for solving a complex problem by breaking it down into a
collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. The next time the same
subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed result, saving computation time
at the expense of (it is hoped) a modest expenditure in storage space.
Dynamic programming is both a mathematical optimization method and a computer programming method. In both contexts it refers to
simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner.
If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a
relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this is called the
Bellman equation.
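
In its generic discrete-time form (the notation here is the textbook one, not taken from this guide), the Bellman equation defines the value of the larger problem as an immediate payoff plus the discounted value of the remaining subproblem:

\[
V(x_t) = \max_{a_t} \bigl\{ F(x_t, a_t) + \beta\, V(x_{t+1}) \bigr\},
\qquad x_{t+1} = T(x_t, a_t),
\]

where F is the period payoff, T the transition rule, and β the discount factor.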


Overlapping subproblems
Like divide and conquer, dynamic optimization combines solutions to sub-problems; it is mainly used when the same sub-problems are
needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they don't have to
be recomputed. Dynamic programming is therefore not useful when there are no common subproblems, because there is no point
storing the solutions if they are not needed again.
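
A minimal sketch of both properties in Erlang, with hypothetical names: Fibonacci has overlapping subproblems and optimal substructure, so threading a map of already-computed answers through the recursion solves each subproblem exactly once.

    -module(dp).
    -export([fib/1]).

    fib(N) ->
        {Value, _Memo} = fib(N, #{0 => 0, 1 => 1}),
        Value.

    %% The map is the table of solved subproblems.
    fib(N, Memo) ->
        case maps:find(N, Memo) of
            {ok, V} -> {V, Memo};                % already solved: look it up
            error ->
                {A, M1} = fib(N - 1, Memo),
                {B, M2} = fib(N - 2, M1),
                V = A + B,
                {V, maps:put(N, V, M2)}          % store for later reuse
        end.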

Optimal substructure
A problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems. This
property is used to determine the usefulness of dynamic programming.
The application of dynamic programming is based on the idea that, in order to solve a dynamic optimization problem from some starting
period t to some ending period T, one implicitly has to solve subproblems starting from later dates s, where t < s ≤ T.
