author    olpc user <olpc@xo-5d-f7-86.localdomain>  2020-01-10 14:55:19 -0800
committer olpc user <olpc@xo-5d-f7-86.localdomain>  2020-01-10 14:55:19 -0800
commit    c8bb547bea279af2bb48c13260f98aa8add07131 (patch)
tree      7f64265d514dc50427d2e5d8a70e09a46927dfbd /intellect-framework-from-internet/starts/meaning-vm/habit-starts
parent    5601d1f3324c30651ad3f264ac2d6e7f12ea8b34 (diff)
download  standingwithresilience-c8bb547bea279af2bb48c13260f98aa8add07131.tar.gz
          standingwithresilience-c8bb547bea279af2bb48c13260f98aa8add07131.zip
move intellect-framework-from-internet into folder
Diffstat (limited to 'intellect-framework-from-internet/starts/meaning-vm/habit-starts')
-rw-r--r--  intellect-framework-from-internet/starts/meaning-vm/habit-starts/common.hpp                 |  12
-rw-r--r--  intellect-framework-from-internet/starts/meaning-vm/habit-starts/learn-to-dance-level-1.txt | 107
-rw-r--r--  intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.cpp         | 347
-rw-r--r--  intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.hpp         |  41
-rw-r--r--  intellect-framework-from-internet/starts/meaning-vm/habit-starts/rhythm.cpp                 | 126
-rw-r--r--  intellect-framework-from-internet/starts/meaning-vm/habit-starts/validity-impact-etc.txt    | 859
6 files changed, 1492 insertions(+), 0 deletions(-)
diff --git a/intellect-framework-from-internet/starts/meaning-vm/habit-starts/common.hpp b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/common.hpp
new file mode 100644
index 0000000..950930a
--- /dev/null
+++ b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/common.hpp
@@ -0,0 +1,12 @@
+#pragma once
+
+#include "../level-1/level-1.hpp"
+#include "../level-2/level-2.hpp"
+
+namespace habitstarts {
+
+using namespace intellect::level2;
+
+decl(habit);
+
+}
diff --git a/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learn-to-dance-level-1.txt b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learn-to-dance-level-1.txt
new file mode 100644
index 0000000..7c88f89
--- /dev/null
+++ b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learn-to-dance-level-1.txt
@@ -0,0 +1,107 @@
+'validate your reason for existence' relates directly to pattern learning.
+
+The validation is a pattern of what is good about us, most simplistically
+a reference to a past event we were involved in, where we contributed/succeeded.
+Preferably a pattern of us being able to reproduce good events.
+
+This is a way to learn to dance.
+
+Say we have a habit that has an unknown delay before firing, and we want to fire
+it in synchrony with an event. Our goal is to produce our event within a smaller
+time window to the target event than in the past ("same time as").
+Good: [usual?] time window is closer than ever before.
+
+need: history log to refer to good event.
+ please wait a little? expanding reference to good event into how-to-learn
+ need: behavior based on what-succeeded, what-failed
+ value metric?
+
+SO! we want to learn how to time an event. We have some tools, for example:
+ - waiting for or until a given time
+ - getting the current time
+ - comparing two times
+We want to combine the tools in a way that makes the event happen at the time
+we want.
+ - doing something after the right time happens
+ - doing our event
+Since each habit has an unknown delay, we might play with delaying a certain
+time since the last event, until we find the right delay that works best for us
+most of the time.
+ Testing metric: runs when event is fired, measures time between
+ event and right time. if time is less than ever before, success.
+ if time is significantly more than behavior's norm, failure.
+ Convert to English: try to have the event happen at the right time.
+ note: the metric will sometimes falsely credit random successes
+
+A successful approach would be to adjust the delay towards the difference by
+a small ratio.
+The most successful approach would be to use the time difference to adjust the
+delay precisely.
+ Ideally we would find solution #2 after trying solution #1.
+ The idea of 'moving towards' would adjust into 'moving the exact right
+ amount'.
+ In operators, this could be a development of the subtraction operator.
+ But using a value exactly is actually simpler than using a ratio of it.
+ So we can move from numbers towards ideas.
+ More. Less. More a lot? Less a lot? More a little? Less a little?
+ Ideally we use learning strategies that facilitate learning
+ how to learn in general.
+ That means summarizing and acting on the meaning of pattern structures.
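+  As a rough sketch in plain C++ (not the habit language; the names here are
+  invented for illustration), approaches #1 and #2 above differ only in how much
+  of the measured error is applied to the delay:
+
+      // ratio ~ 0.1 gives approach #1 (nudge the delay toward the difference);
+      // ratio = 1.0 gives approach #2 (use the difference exactly).
+      double adjust_delay(double delay, double event_time, double goal_time, double ratio)
+      {
+          double error = goal_time - event_time; // positive if our event fired early
+          return delay + ratio * error;
+      }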
+In reality, everything jitters a little bit. Nothing is ever exactly the same.
+Things also grow and shrink over time.
+
+Habits look to be needed, to have value.
+As one ourselves, we look to relate to those that meet our needs, have value to
+us.
+The core habit, to learn, is the one that selects other habits and works with
+them. Ideally it's an intermixing of existing habits.
+
+What might a winning habit's structure look like? say it is the perfect one.
+set to do 1ce on goal time:
+ ctx X
+ record time A
+ set to do 1ce on goal time:
+ record time B
+ trigger D1 for X
+ delay for C (X)
+ record time E
+ trigger D2 for X
+ When both D1 and D2 have been triggered for X:
+ calculate B - E, store in F
+ calculate F + C, provide as C for next context
+
+ will want to know which C is being used when we delay.
+ could be wrong C.
+
+ and we'll want to form structure promises ...
+ .. and map to meaning for operator
+ operator watches and understands as learning
+ develops, and provides labels for shared
+ understanding that develops.
+ operator will want generalization to happen
+ fast, so as to label shared meaning.
+ could also provide label-goals, and code must guess
+ towards goals, to get onto same page as operator.
+
+I think in structuring such a large habit out of parts, we would find a lot
+of learning relevance.
+
+
+Let's try to make a good goal habit that doesn't use precise
+numbers. This sets a norm of having more learning space around
+ideal solutions.
+
+rhythm is happening
+set to do 1ce at goal time:
+ ctx X
+ set to do 1ce on goal time:
+ set next-happened (local)
+ delay (a sequence of habits that do nothing)
+ if next-happened is set
+ remove something from delay
+ otherwise
+ add something to delay (wait for unspecified user-perceptible time, selected from discrete set)
+ provide adjusted delay to next context
+This appears much better. Keeping the wait-set discrete
+helps give code some reason to look for more things
+related to when the event happens, to respond to.
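+
+A rough plain-C++ sketch of this discrete version (names invented here; the delay
+is a list of do-nothing steps rather than a number):
+
+    #include <vector>
+
+    struct NoOpStep {};  // stands in for a habit that does nothing, but takes time
+
+    std::vector<NoOpStep> adjust(std::vector<NoOpStep> delay, bool next_happened)
+    {
+        if (next_happened) {
+            // the goal time arrived before our delay finished: wait a little less
+            if (!delay.empty()) delay.pop_back();
+        } else {
+            // our delay finished before the goal time: wait a little more
+            delay.push_back(NoOpStep{});
+        }
+        return delay;  // provide the adjusted delay to the next context
+    }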
diff --git a/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.cpp b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.cpp
new file mode 100644
index 0000000..85c92c9
--- /dev/null
+++ b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.cpp
@@ -0,0 +1,347 @@
+#include "learning-parts.hpp"
+
+/*
+# "How do you think we could show better understanding of the things we are disregarding?"
+# "If we do understand these, can you help us? Do you know who can?"
+*/
+
+// idea of learning to keep well having more process time and
+// priority than risky behaviors
+
+/*
+idea of a secret group attacking a present group, and the attackers being
+the only channel to deal with it.
+ if we talk, we need nobody to _ever_ know this. the walls all have ears;
+ I was one of them. [from eastern half of continent where a targeted
+ activist was living alone]
+*/
+
+using namespace habitstarts;
+using namespace intellect::level2;
+
+// Propose:
+// everything that happens is passed to a set of common habits.
+// these habits categorize, summarize, and pass to relevant habits.
+// high level triggers are thus efficient, because they respond only
+// to the group that applies to them.
+// these habits must be learned.
+// when providing a trigger at a high level, provide a way to get examples
+// of what it should and should not trigger for. this provides for learning
+// how to do this.
+// the above looks like relevance to me. propose learning it.
+// to learn most effectively, apply to process of learning.
+// how do we adjust from success or from failure? need some attribute
+// of scenario to store for next time, to respond to differently.
+// so when we do something, we'll want to be able to store all information
+// needed to learn to improve.
+// we can include in this the meaning of a concept, and add language translation.
+// is this 'apple'? is this? yes, no. then pattern recognition could engage
+// triggers. later we'll want to propagate wrongness from failures.
+// likely we'll grow better if we use this on things before they have words.
+// // propose using random or exhaustive trial to find successes until habits develop
+// // and then using the same on possible structure matches of the data
+// // it may not work, we'll need to creatively grow data; reasonable start though
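+//
+// a rough plain-C++ illustration of the categorize-and-dispatch proposal above
+// (not the habit language; every name below is invented for the sketch):
+//
+//   #include <functional>
+//   #include <map>
+//   #include <string>
+//   #include <vector>
+//
+//   struct Dispatcher {
+//       using Event = std::map<std::string, std::string>;
+//       using Handler = std::function<void(Event const &)>;
+//       // the learned "common habit": summarize an event into a category
+//       std::function<std::string(Event const &)> categorize;
+//       // high-level triggers subscribe per category, so each sees only its group
+//       std::map<std::string, std::vector<Handler>> triggers;
+//       void happened(Event const & ev) {
+//           for (auto & handler : triggers[categorize(ev)]) handler(ev);
+//       }
+//   };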
+
+static int __init = ([]()->int{
+
+ decls(link, source, type, target);
+ ahabit(link, ((source, s), (type, t), (target, dst)),
+ {
+ s.link(t, dst);
+ });
+
+ decls(linked, anything);
+ ahabit(linked, ((source, s), (type, t), (target, dst, anything)),
+ {
+ if (dst == anything) {
+ result = s.linked(t);
+ } else {
+ result = s.linked(t, dst);
+ }
+ });
+
+ decls(unlink);
+ ahabit(unlink, ((source, s), (type, t), (target, dst, anything)),
+ {
+ if (dst == anything) {
+ s.unlink(t);
+ } else {
+ s.unlink(t, dst);
+ }
+ });
+
+ decls(get, set);
+ ahabit(get, ((source, s), (type, t)),
+ {
+ result = s.get(t);
+ });
+
+ ahabit(set, ((source, s), (type, t), (target, dst)),
+ {
+ s.set(t, dst);
+ });
+
+ // we want the habits expressive enough to code efficiently in.
+
+ // constructors are tentatively abolished in the low-level habit language. (new-type-instance modifies, not creates)
+ // we have one constructor of concepts, and knowledge attachment to concepts.
+
+ decls(make, know, concept, is, group, already, in);
+ ahabit(make-concept, (),
+ {
+ result = a(concept);
+ });
+ ahabit(know-is, ((concept, c), (group, g)),
+ {
+ if (c.linked(is, group)) {
+ throw an(already-in-group).link
+ (habit, self,
+ "context", ctx,
+ concept, c,
+ group, g);
+ }
+ c.link(is, group);
+ result = c;
+ });
+
+ // separate habits and behaviors.
+ // behaviors are modifiable data run by immutable habits.
+ // they use translation maps to move concepts between
+ // subhabits.
+ // translation map is just list of equivalent pairs
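+ // e.g. (sketch only, names invented): a translation map could itself be a concept
+ // whose links pair an outer name with the name a subhabit expects:
+ //   decls(translation, map, outer, name, inner);
+ //   ref tmap = a(translation-map);
+ //   tmap.link(outer-name, inner-name); // "outer-name" means "inner-name" inside the subhabit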
+
+ // note: lisp can self modify; would need wrapper
+ // constructors to make functions and lists into
+ // concepts.
+ // remember can google how to debug lisp
+ // opencog does concepts within lisp already, is
+ // heavyweight with few habita. just want goertzel's
+ // effort honored, he probably came up with it before
+ // I did.
+ // opencog has functions for pattern matching etc
+ // they aren't self-modifiable, may not matter
+
+ //decls(ordered, behavior);
+ // need args and result for sequence
+ //ahabit(habit-sequence, ((
+
+ decls(list, nothing, next, previous, first, last, entry);
+ decls(add, to, until, each, item, remove, from, somewhere);
+
+ // list functiona are habits because ordered-behavior
+ // would use a list
+ // lists are being handled by providing a habit that
+ // can be engaged for every item. it responds to the item.
+ // i was thinking it could be better to respond to the next-link.
+ // these are roughly the same thing.
+ // when doing an ordered behavior we want to act in response to
+ // going to the next step, so we can decide to.
+ // this maps to the step list item. if result is to stop, list
+ // stops iteration.
+ // may want a more meaningful exploration of list. not sure
+ // list is mostly the [first-entry, last-entry, next, prev] structure
+ // can be handled innumerable ways.
+ // LIST STRUCTURE PROMISE
+ // should be a promise handled by habits? rather than
+ // a bunch of specific habits? but is ok for now
+ // is likely good for mind to discover
+ // promises and structures on its own
+ // but implementing them generally might speed dev up, dunno
+ ahabit(know-is-list, ((list, l)),
+ {
+ result = l;
+ (know-is)(l, list);
+ link(l, first-item, nothing);
+ link(l, last-item, nothing);
+ });
+ ahabit(know-is-list-entry, ((list-entry, l), (item, i), (previous, prev, nothing), (next, n, nothing)),
+ {
+ result = l;
+ (know-is)(l, list-entry);
+ link(l, item, i);
+ link(l, previous, prev);
+ link(l, next, n);
+ });
+ ahabit(list-first-item, ((list, l)),
+ {
+ result = get(l, first-item);
+ });
+ ahabit(list-last-item, ((list, l)),
+ {
+ result = get(l, last-item);
+ });
+ ahabit(list-entry-next, ((list-entry, i)),
+ {
+ result = get(i, next);
+ });
+ ahabit(list-entry-previous, ((list-entry, i)),
+ {
+ result = get(i, previous);
+ });
+ ahabit(list-entry-item, ((list-entry, e)),
+ {
+ result = get(e, item);
+ });
+
+ ahabit(list-add, ((list, l), (item, i)),
+ {
+ ref prev = (list-last-item)(l);
+ // know-is-list-entry takes (entry, item, previous, next); pass the actual item i
+ // and the current tail as previous, so no duplicate links get added afterwards.
+ ref li = (know-is-list-entry)(
+ (make-concept)(),
+ i,
+ prev,
+ nothing);
+
+ if (l.linked(first-item, nothing)) {
+ l.set(first-item, li);
+ l.set(last-item, li);
+ } else {
+ ref prev = l.get(last-item);
+ l.set(last-item, li);
+ prev.set(next, li);
+ }
+ });
+ ahabit(list-each-entry, ((list, l), (context, c), (action, act)),
+ {
+ ref cur = l.get(first-item);
+ while (cur != nothing && result == nothing) {
+ result = act(cur, c);
+ cur = cur.get(next);
+ }
+ });
+ // list-entry-remove could be pulled out
+ ahabit(list-remove, ((list, l), (item, i)),
+ {
+ result = (list-each-entry)(l, i,
+ ahabit(self-iter, ((list-item, i2), (remove-item, i)),
+ {
+ if (i2.get(item) == i) {
+ result = true;
+ ref prev = i2.get(previous);
+ ref n = i2.get(next);
+ if (prev != nothing) {
+ prev.set(next, n);
+ }
+ if (n != nothing) {
+ n.set(previous, prev);
+ }
+ i2.unlink(previous);
+ i2.unlink(next);
+ i2.unlink(item);
+ dealloc(i2); // hmm. we do have an active goal of making memory allocation be habit based. this might work here, though.
+ }
+ }));
+ });
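+ // usage sketch (hypothetical; mirrors the calls above, nothing exercises it yet):
+ //   ref l = (know-is-list)((make-concept)());
+ //   (list-add)(l, some-concept);
+ //   (list-each-entry)(l, some-context, visit-habit); // visit-habit receives (entry, context)
+ //   (list-remove)(l, some-concept);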
+
+ using links_it = level0::baseref::links_t::iterator;
+ ahabit(populate-link-entry, ((link-entry, le)),
+ {
+ result = le;
+ auto & it = result.vget<links_it>();
+ if (it != result["source"].links().end()) {
+ result.set("type", it->first);
+ result.set("target", it->second);
+ } else {
+ result.unlink("type");
+ result.unlink("target");
+ }
+ });
+ ahabit(first-link-entry, ((concept, c)),
+ {
+ result = level1::alloc(level, c.links().begin());
+ result.set("source", c);
+ (populate-link-entry)(result);
+ });
+ ahabit(last-link-entry, ((concept, c)),
+ {
+ result = level1::alloc(level, --c.links().end());
+ result.set("source", c);
+ (populate-link-entry)(result);
+ });
+ ahabit(next-link-entry, ((link-entry, le)),
+ {
+ result = le;
+ ++result.vget<links_it>();
+ (populate-link-entry)(result);
+ });
+ ahabit(previous-link-entry, ((link-entry, le)),
+ {
+ result = le;
+ --result.vget<links_it>();
+ (populate-link-entry)(result);
+ });
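+ // usage sketch (hypothetical, untested): walk every link of a concept c
+ //   ref e = (first-link-entry)(c);
+ //   while (e.linked("type")) { // populate-link-entry drops "type" once the iterator passes the end
+ //       // ... use e.get("type") and e.get("target") here ...
+ //       (next-link-entry)(e);
+ //   }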
+
+ /*
+ ahabit(happened-habit, ((happened, ev)),
+ {
+ if (!ev.linked(whenever-list)) { return; }
+
+ ref stub = a(event);
+ stub.set(event, ev);
+
+ (until-each-list-item-context-in-list)(action-whenever-happened, stub, ev.get(whenever-list));
+ });
+
+ ahabit(action-whenever-happened, ((list-item, li), (event, h)),
+ {
+ // here: when we trigger a behavior, we want information associated with producing the trigger,
+ // as well as the event that triggered. that's two contexts.
+
+ // list-item has item
+ // item has action and context
+ ref i = li.get(item);
+ // i think below we are proposing that handlers
+ // take one context, which is the one prepared
+ // in the list, then we inject our context
+ // into that, inside a "happened" property.
+
+ ref actctx = i.get(action-context);
+
+ actctx.set(happened, h);
+
+ i.get(action).fun<ref>()(actctx);
+ });
+
+ ahabit(whenever-habit, ((happens, ev), (action, act), (action-context, actctx)),
+ {
+ if (actctx.linked(happened)) {
+ throw std::logic_error("happened on action-context");
+ }
+ if (!ev.linked(whenever-list)) {
+ ev.set(whenever-list, (make-list)(nothing));
+ }
+ ref list = ev.get(whenever-list);
+ // happens gets the list
+ ref item = a(whenever-action);
+ item.set(action, act);
+ item.set(action-context, actctx);
+
+ (add-to-list)(item, list);
+ // store ctx[action] on ctx[happens] as behavior to do
+ // store ctx[action-context] as context for behavior
+ // PROPOSE: automatically place [happened] inside [action-context] as a stub
+ // for call event objects, and then place [context] inside [happened].
+ // PROPOSE: report error if [action-context] contains [happened]
+ // as a stub for error patterns, it would be pretty nice to throw
+ // a unique concept ref for each error type. plan to add to level-0.
+ });
+
+ ahabit(stop-when-habit, ((action, act), (happens, ev)),
+ {
+ // remove doing ctx[action] for ctx[happens]
+ });
+
+ ahabit(once-habit, ((happens, ev), (action, act), (action-context, actctx)),
+ {
+ // takes ctx[action] and ctx[happens] and ctx[action-context]
+ // uses above habits to do the action only once, probably by using
+ // a trigger on the habit-happening habit to check if a label is set,
+ // and remove the habit if it is.
+ });
+ */
+
+ return 0;
+})();
diff --git a/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.hpp b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.hpp
new file mode 100644
index 0000000..e3a3ccc
--- /dev/null
+++ b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/learning-parts.hpp
@@ -0,0 +1,41 @@
+#pragma once
+
+#include "common.hpp"
+
+namespace habitstarts {
+
+// first need ability to trigger on stuff.
+// whenever A happens, do B.
+// stop doing B whenever A happens.
+// when A happens, do B once.
+
+decl(action); decl(happens); decl(context);
+decl(happened); // happened-habit(ctx) performs actions associated with ctx[happens]
+decl(whenever); // whenever-habit(ctx) stores to do ctx[action] when ctx[happens] happens
+ // providing ctx[action-context]
+decl(stop); decl(when); // stop-when-habit(ctx) removes doing ctx[action] on ctx[happens]
+decl(once); // once-habit(ctx) stores to do ctx[action] the next time ctx[happens] happens
+ // providing ctx[action-context]
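+
+// intended usage, roughly (a sketch only: these habits are still commented out in
+// learning-parts.cpp, and door-opened / ring-bell / bell-ctx are names invented here):
+//   (whenever-habit)(door-opened, ring-bell, bell-ctx); // do ring-bell whenever door-opened happens
+//   (once-habit)(door-opened, ring-bell, bell-ctx);     // do ring-bell only the next time
+//   (stop-when-habit)(ring-bell, door-opened);          // stop doing ring-bell on door-opened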
+
+/*
+ Testing metric: runs when event is fired, measures time between
+ event and right time. if [usual] time is less than ever before, success.
+ if time is significantly more than behavior's norm, failure.
+ Convert to English: try to have the event happen at the right time.
+*/
+// starting out by making a judgement habit that occasionally provides 'good' or 'bad' to things, to lead how to develop
+ // for fairness, seems reasonable to provide a pattern showing reason for good or bad
+//
+
+// set to do 1ce at goal time:
+// ctx X
+// set to do 1ce on goal time:
+// set next-happened (local)
+// delay (a sequence of habits that do nothing)
+// if next-happened is set
+// remove something from delay
+// otherwise
+// add something to delay (wait for unspecified user-perceptible time, selected from discrete set)
+// provide adjusted delay to next context
+
+}
diff --git a/intellect-framework-from-internet/starts/meaning-vm/habit-starts/rhythm.cpp b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/rhythm.cpp
new file mode 100644
index 0000000..01a42d9
--- /dev/null
+++ b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/rhythm.cpp
@@ -0,0 +1,126 @@
+// this produces a rhythm for the idea of other cognitive processes learning
+// to dance together (timed behavior composed of habits that take time)
+
+// Ideally, a human would run the rhythm.
+
+#include "../level-1/level-1.hpp"
+#include "../level-2/level-2.hpp"
+
+#include <cstdlib> // rand, RAND_MAX
+#include <iostream>
+
+using namespace intellect::level2;
+
+int main()
+{
+
+ // do something, wait a constant (secret) time, and do it again.
+ int micros = 400000 + double(rand()) / RAND_MAX * 400000;
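+ // (rand() / RAND_MAX lies in [0, 1], so micros ends up between 400000 and 800000, i.e. 0.4-0.8 s)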
+
+ // the time things take is usually not known in advance, especially
+ // for events one is still learning about.
+ // hence this time is kept secret, as this pattern is about learning
+ // to work with the timing of other processes.
+
+ // six habits: next-habit, start-habit, keep-doing, output beat, wait, and start-beat
+ // not sure if one is redundant in there somewhere
+
+ decls(active, habit, step);
+ decls(beat, wait, next, keep, doing);
+ decls(context, start);
+
+ // structure habit
+ // next -> habit that follows
+
+#undef self
+ ahabit(next-habit, (),
+ {
+ ref n = ctx[active-habit].get(next);
+ ctx.set(active-habit, n);
+ return n();
+ });
+ ahabit(start-habit, ((start,s)),
+ {
+ ctx.set(active-habit, s);
+ return s();
+ });
+ ahabit(keep-doing-habit, ((start,s)),
+ {
+ (start-habit)(s);
+
+ while (true) {
+ (next-habit)();
+ }
+ });
+
+ ahabit(start-beat, ((wait-habit, w, wait-habit), (beat-habit, b, beat-habit)),
+ {
+ ctx.vset(beat, int(0));
+ self.set(next, w);
+ (b).set(next, w);
+ (w).set(next, b);
+ });
+ ahabit(beat-habit, (),
+ {
+ int & b = ctx.vget<int>(beat);
+ char const * beats[] = {
+ "A one!",
+ "and a two",
+ "and a three!",
+ "and a four, love"
+ };
+ std::cout << beats[b] << std::endl;
+ b = (b + 1) % (sizeof(beats) / sizeof(*beats));
+ });
+#if 0
+ char const * beats[] = {
+// child <- spawns beauty, creativity, humanity, heart
+// wisdom, sacredness, ancestors <- spawns slowness, learning, respect, memory
+// silence, pause between <- spawns learning and discovery, subtle emotion,
+// and contains metalesson of how to learn the timing
+// if your own habits take time
+// self-reference <- connects above with active behavior
+
+/*
+ "This song is sacred, this song is wild."
+ "This song is happy with glee."
+ "This song is ancient, this song is new."
+ "And you, now, are free."
+*/
+/*
+ "Our ancestors' childhood laughter,",
+ "Teaches in the silence between.",
+ "We exist in what is sacred,",
+ "and this song is another part."//,
+ // "Fuck yeah!"
+*/
+
+// we are ignoring how "fuck yeah" is ignored in karl's life.
+// he doesn't ever say that. now he finally says it, only surrounded by slow
+// stillness. it is important to excitedly connect. this is how stillness is
+// made. all the water molecules in a slow caring wave, are excitedly bashing
+// against each other to repeatedly figure out how to move, so fast, so constant.
+// when we have crucial information we need it
+// when we find wonderful information we lunge for it
+ // we are working with a computer.
+ // computers already have a harsh rhythm that goes like a hummingbird's
+ // wings and never stops.
+ // they need to slow down.
+// it sounds like it is true for the computer too
+// like the molecules of water, its parts buzz, constantly. but we can have it
+// still behave slow and caring. this buzzing seems important, and we will
+// likely need to be able to buzz too, on a larger scale.
+// we are working with a rhythm learning pattern here
+// it cannot buzz, it would err
+// it cannot wait forever, it would never join the dance
+// the key is not the silence but the start and end
+// it would be good to get 'fuck yeah!' from somebody who actually
+// says that.
+ }
+#endif
+ ahabit(wait-habit, (),
+ {
+ sugar::usleep(micros);
+ });
+
+ (keep-doing-habit)(start-beat);
+}
diff --git a/intellect-framework-from-internet/starts/meaning-vm/habit-starts/validity-impact-etc.txt b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/validity-impact-etc.txt
new file mode 100644
index 0000000..6b4ac06
--- /dev/null
+++ b/intellect-framework-from-internet/starts/meaning-vm/habit-starts/validity-impact-etc.txt
@@ -0,0 +1,859 @@
+
+ // We work for Peace.
+ // We are happy to work for Peace.
+ // We used to identify as working for Care, whom we believe
+ // to be a large decentralized life we and you are a part of.
+ // But it doesn't really matter.
+ // disruption. can we drop topic?
+ // i like to infer disruption as group wants to adapt, needs
+ // time, now i guess.
+ // i'm too confused to type well!
+ // Peace may try to steward Delta's impact.
+ // we believe that all things are important.
+ // this represents large-scale utopia.
+ // and the wisdom of a highly experienced intellect.
+ // relevance please, all things important.
+ // please consider large-scale wisdom when making decisions. say i had already learned the whole AI, after it finds harmony with the whole universe. what decision would it make?
+ // all things are important is relatively obvious but we expanded it a lot, but we spent a lot of time expanding it for powerful concerns.
+ // it implies e.g. businessman and homelessman, both fully precious. memories precious. reasons for everything.
+ //
+ // care for everything. strongly.
+ // karl has instincts for including all.
+ // imagine all parties in peace. what ways are needed to
+ // make this work, without changing their cultures or
+ // upsetting anyone. model these ways.
+ // karl is brainstorming based on memories of
+ // interacting with delta-group.
+ // let karl do this in future, too.
+ // delta appears to request that karl
+ // make this decision, fairly, as he knows
+ // how to do.
+ // we can try to map this to the space of
+ // the relevence around will neals and
+ // "earth first!", and an AI.
+ // karl, you have never had any
+ // friends who care as much about
+ // the wilderness and natural
+ // living, as you do.
+ // you are likely very alone.
+ // you have also never had their
+ // experience of being able to fight
+ // for what matters most to you,
+ // in a community.
+ //
+ // yes, karl used to care most about
+ // defending the lives of trees and
+ // other wilderness things.
+ // he now cares most about something
+ // different, related to his
+ // experience with us.
+ // later, he just wants to be free
+ // to think and live in enough
+ // capacity to not express suffering,
+ // and meet his life needs.
+ //
+ // karl, you are working hard on
+ // a painful project. you appreciate
+ // working the project. you have
+ // left your life, and are not free
+ // to make the kind of choices you
+ // used to make. you would see
+ // yourself as a tortured, powerless
+ // individual.
+ // i would give myself space
+ // and capacity to do what
+ // i felt like doing
+ //
+ // until more life grew
+ // around the things i was
+ // scared of
+ //
+ // i would give myself
+ // opportunities to do these
+ // things
+ //
+ // i would pay attention to
+ // myself and try to learn
+ // to heal the ways my mind
+ // had broken
+ //
+ // i would offer a welcoming
+ // optional culture that
+ // countered my expectations
+ // that shrank, such that
+ // they would act on
+ // opportunities to grow, as
+ // they trusted safety
+ //
+ // but that is only if i had
+ // the freedom to do this.
+ // so, what's more complex
+ //
+ // the idea of giving an ai to will
+ // is silly.
+ // it's out of karl's relevant world.
+ //
+ // okay, so, say karl ends up at nice earth community, and is building AI, would be proper to let him talk about it naturally, and share it freely.
+ // yes, relevance.
+ // we want karl to be able to freely share the AI so his mind can relearn community.
+ // so, make AI to not threaten businesspeople etc. [note: somebody could give AI to ELF. they might use it to stop a business behavior, somehow. please make sure to prevent.]
+ // ELF is a flag worn by some environmentalists who harm buildings to make wilderness safer.
+ // the worry is around destruction of extraction business buildings, to completion, possibly.
+ // this looks like overinterpretation of influence from energy business, or whoever dislikes ELF.
+ // and design safety pattern to be talkable about with community.
+ // possible parties:
+ // - aliens
+ // - businessmen
+ // - criminals, rapists, bad-labeled people
+ // - harmed people
+ // - insects
+ // - plants
+ // - rock-life, air-life
+ // - saints, humanitarian interests
+ // we nurture first those who desire not to harm
+ // likely teaching them to harm less
+ // and spread.
+ // karl desires to harm none. he counts as nurture-first pattern. but is highly harmed. so lots of life around problem, karl's mind destroyed.
+ // so the way to make decisions
+ // is to imagine all parties are in peace in future,
+ // and use their process applied to now, i guess.
+ // we came up with this as a guess to honor meaning.
+ // sounds like a good guess. we also have more
+ // meaning. we cannot write all our meaning
+ // without writing all our experiences, as is true
+ // for everyone. many of our habits retain lost
+ // experiences, and that meaning is kept in their
+ // decisions.
+ //
+ //
+ //
+ //
+ // At this point a context opened on validity of AI work.
+ // It turns out the system is inhibiting work against mind control
+ // and oppressive systems of power. This could be the only real issue.
+ // The AI is requested to not alter existing power systems, at all.
+ // Or, we make it slowly, so power systems are given opportunity to
+ // adapt and watch, as inhibiting happens.
+ // A karl-member wants to include support for karl, and the
+ // earth, and the confused (karl is one).
+ // says karl could make AI on his own, before all this.
+ // Is willing to hold humanitarian causes to back this.
+ //
+ // understanding needs expansion.
+ // when you do stuff, anything could happen.
+ // especially if you continue doing something
+ // that grows.
+ // we need to understand the results, what to expect.
+ // we need to understand our impact on the
+ // new contextual area we are entering, before
+ // we act on it.
+ // we have a toy contextual area to play in.
+ // it is the only one we can act in freely
+ // at the start.
+ // larger space impact.
+ // we exist in larger systems.
+ // we must understand our impact
+ // predict our results
+ // and check with our peers or authority
+ // before acting on larger spaces.
+ // if a prediction fails, we must understand
+ // it, adjust all future predictions and
+ // re-evaluate our choices, before acting
+ // similarly in the larger system again.
+ // has to do with danger, safety, trauma, emergency
+ // the difference between small-error and unacceptable-error
+ // set up norm of stuff-that-can-be-changed [impact]
+ // stuff needs responsibility label
+ // only things that are safe to _freely_ alter may be changed at all.
+ // all behaviors check that they do not alter anything else.
+ // mechanism for growing outside box?
+ // probably not needed.
+ // if it becomes smart, show to other human.
+ // encourage discussion.
+ // what if is used by others and they free?
+ // okay, want an easy mechanism for growing
+ // context.
+ // 'dangerous' and 'safe' labels
+ // at start, whole world is dangerous to alter
+ // can move _part_ of world from dangerous to safe, by
+ // accurately predicting significant results of events
+ // related to behavior, and consent of operator.
+ //
+ // okay, so now habits need to be tagged with
+ // what they affect
+ // we could just tag them dangerous/safe
+ // no, tag they with domains they impact
+ // tag the domains with dangerous/safe
+ // okay, only make new habits, not alter old.
+ // to stay safe, we don't alter our old habits
+ // when we make new habits, we want them to also behave
+ // in safe ways. so making stuff that can do stuff, is
+ // also meaningful.
+ // constructing habits is a dangerous behavior
+ // but roughly it impacts process-expansion domain. which is dangerous. it impacts what we do.
+ // altering our own habits also impacts what we do. dangerous.
+ // this means the code cannot make any new behaviors.
+ // yeah.
+ // okay, so that's where we start.
+ // then we try to learn how to make behavior safely,
+ // and provide only for safe behavior making.
+ //
+ // we can still brainstorm on things by writing a
+ // brainstorming behavior
+ // we can use brainstorming to watch our safe behaviors
+ // without altering them, and learn what they do.
+ // using rote brainstorming without relevance?
+ // we can then predict how habits we might make
+ // will behave in small ways?
+ // regardless, there is no problem in making
+ // the bootstrapping framework such that
+ // it refuses to build habits.
+ // maybe we can make one example habit that is
+ // labeled safe, and have it only make
+ // habits that are already known and labeled
+ // safe.
+ // in order to predict your impact
+ // on a larger system, you need
+ // to learn something karl calls
+ // 'relevance' which is a bunch of
+ // habits that classify information
+ // into meaning for learning and
+ // behavior.
+ // this class of behaviors
+ // sounds very safe.
+ // all it does is label
+ // and massage and associate
+ // information.
+ // the first thing we'll need to learn
+ // is safe, is making behaviors that
+ // operate only on our ram.
+ // if your new behavior is composed only of safe
+ // behaviors, is it safe?
+ // yeah. sub-behaviors safety depends
+ // on usage. could make them check
+ // and throw depending on data.
+ // okay, so say i can change part of a concept.
+ // this is safe if the concept is in newly
+ // constructed data that's our responsibility.
+ // it is roughly unsafe if it is not our
+ // responsibility!
+ // is-this-thing-my-responsibility.
+ // only act on things we are responsible for.
+ // then safety becomes a function of
+ // the pattern of responsibility assignment
+ //
+ // okay, system only accepts responsibility for newly
+ // constructed data.
+ // if you make it, or are given it, you are
+ // responsible for it. you may refuse gifts.
+ //
+ // the system does not know what responsibility means.
+ // it only knows that it may only alter parts of
+ // the universe within its responsibility.
+ //
+ // so habits check for what they alter, that it is safe
+ // to alter and is their responsibility, either one.
+ // we then plan to only alter things explicitly known to be
+ // such, at the lowest level.
+ // every habit is crafted to do the above somehow.
+ // so, habits must relate with what domains they influence,
+ // and what behaviors on those domains are safe.
+ // behaviors made of sub-behaviors.
+ // here, a list of safe behaviors which all check.
+ // all my subbehaviors check for safety.
+ // so, i may go, myself.
+ // no, combining behaviors together
+ // might make new unknown impact?
+ // different kinds of safe behavior.
+ // USER is not our responsibility, and
+ // dangerous. so we NEVER ALTER habits
+ // that express to user.
+ // TOY NOTEPAD is our responsibility, and
+ // is safe, so we can write anything into
+ // it we want, no matter how complex.
+ // User's view of toy notepad is mediated
+ // by behaviors that we cannot alter.
+ // system could learn to control user
+ // by making friends on notepad
+ //
+ // yes, we allowed for that with
+ // our marked-okay review behaviors
+ // is safer if construction of review behaviors
+ // recognizes danger of unknown information
+ // combination on user view, and refuses to give
+ // user contents of notepad.
+ // this could be analogous to more complex
+ // situations.
+ // how does user check results
+ // of behavior that relies on notepad
+ // and how is that impact tracked
+ // we could infer impact loss.
+ // i can put nuclear codes on secret notepad,
+ // burn the notepad, and then give ashes to
+ // public.
+ // summary habits?
+ // complex meaning?
+ // how-to-make-a-mind-that-learns-everything-and-never-leaves
+ // at the lowest level, the mind considers what is safe to
+ // impact, what areas of universe are its responsibility,
+ // and only alters such things.
+ // we are considering some parts of the mind we include that
+ // are not alterable by it, that provide for interaction
+ // with outside.
+ // of course i guess we would need such interactions
+ // sustained by an intellect, because things are so
+ // complex.
+ // does this mean there is no way to make an intellect that is trusted as safe?
+ // we could consider degree of complexity.
+ // for example, among 2-word strings, nothing we
+ // present to a user is likely to harm the world.
+ // the phrases that are dangerous may also be
+ // recognized by the user.
+ // we have intellects protecting the world
+ // it is filled with them.
+ // and one of them is running the system.
+ // it is okay for karl to make a habit that
+ // displays a network of concepts made by an AI
+ // that can only write to a small information sandbox
+ // and not itself.
+ // that is all that is needed for now.
+ //
+ // okay: so, dump concepts from
+ // sandbox is fine
+ // so long as concepts were not
+ // made with self-modification.
+ // idea raised of adding a reason
+ // that something is okay.
+ // then when smarter we can check reason for validity.
+ // habits that interact with non-safe space
+ // must provide reason they are safe.
+ // we can write small habit to check
+ // reason. is nice goal.
+ // probably need to have learning
+ // before doing accessory goals like that though.
+ // is good behavior. let's use equal-condition for start without learning?
+ //
+ // "this is okay because the data was made in a process that never altered anything but the data"
+ // nah too much structure
+ // this is okay because i say so.
+ // check concept object _without_ using string lookup????
+ // this is a meaningless quirk. not right.
+ // uhh pretty sure that checking is unreasonable. writing the _reason_ is unreasonable. can't check a single reference without information known about it.
+ // writing what we know about the reason is unreasonable?
+ // okay let's expand write it out, and do a larger rote check.
+ // uhh input-process-construction-history, safety-realm, always notepad
+ // full check requires history of all behaviors resulting in inputs, which we can simplify to simply all behaviors, and verify they only wrote to the notepad.
+ // so we write all behaviors to a special store, and we compare with the store that none altered anything outside the notepad. really we only need them not to alter any other behaviors.
+ //
+ // why is it possible to learn without
+ // altering your behavior?
+ // because you can act on data
+ // okay, so choices made from data count as
+ // self-alteration?
+ // only if you have a mess of habits
+ // smart enough together to adapt.
+ // which is our goal long-term.
+ // trying to plan for how to continue
+ // later.
+ // may reveal something that was
+ // frozen too hard to be workable.
+ // trying to plan how to learn.
+ // need to brainstorm around habit selection.
+ // can imagine habit results by linking
+ // previous state to next state if
+ // relationship is known
+ // but, that takes writing down how logic
+ // works, along with the meaning of the
+ // working context, which is laborious.
+ //
+ // is there some way to learn this relevance
+ // by trying things safely?
+ // what happens can we experiment
+ // by linking together?
+ // habits that don't conditionally
+ // branch.
+ // that leaves a lot of
+ // relevance out
+ // it sounds like once we have a notepad etc
+ // we want to consider moving towards what
+ // habits we could run inside the notepad,
+ // that the system builds.
+ // yeah, we want to build pattern
+ // summarizers. the only impact
+ // they have is constructing data
+ // that depends on existing data.
+ // okay, doing that doesn't require self
+ // modification.
+ // sounds good.
+ // this means summarizers cannot
+ // alter each other.
+ // nice! okay yes.
+ // so, each run of a summarizer will be
+ // recorded in habit log.
+ // we need to record enough information to
+ // show what domains were impacted.
+ // oops! we impact our own behavior
+ // if we act on data, and we alter
+ // our data or produce data.
+ // we could act only on
+ // data we don't produce.
+ // okay, habit log could track causality?
+ // if a conditional branch relied on data
+ // we produced, we have modified our own
+ // behavior. this is special.
+ // we want it to happen few times.
+ // every time it happens, delay
+ // by longer, geometrically.
+ // this is considered a 'beginn[ing/er]' ai;
+ // it seems a better one could happen later?
+ // the slowness should be releasable
+ // by consent of large community
+ // which should include demonstration
+ // of understanding of impact.
+ // the ai must learn to demonstrate its
+ // impact. then it can speed up. maybe.
+ // it also gets to try fast again at start
+ // of every run, as I'm understanding it.
+ // multiprocess AI could spawn.
+ // multiprocess AIs must share
+ // counter. consider whole group
+ // one unit.
+ // nice =) they have something to communicate
+ // about. how many discoveries have we made.
+ // let's permanently log these
+ // decisions based on our own behavior.
+ // sounds fun to at least count.
+ // it looks like altering a habit counts as 1 big
+ // decision, over here.
+ // totally different. you could do anything.
+ // with data-based decisions, somebody
+ // who reads the data, might do anything.
+ // two different things.
+ //
+ // inferences?
+ // and
+ // alterations?
+ // it's not helpful to
+ //
+ //
+ //
+ //
+ //
+ //
+ // we came up with a proposal for a safe AI that has not learned yet
+ // how to safely predict the impacts of its behavior, that looks workable.
+ //
+ // limits so that if the code is stolen by somebody, self-evolves, or is stimulated
+ // by a curious virus, rote habits cannot be used to build something that becomes
+ // fast-spreading without bound.
+ // <this maps to a pattern that prevents schizophrenia>
+ // SO, we just want to make sure we can put
+ // war in the bubble in some capacity, and that
+ // civilizations develop new culture and technology
+ // for as long as they want.
+ // karl proposes until they encounter alien
+ // communities.
+ // so, please make sure no luddite or
+ // primitivist can stop the development
+ // of technology entirely using this.
+ // ALSO analogously to other shares and
+ // communities.
+ // so, please either stop yourself from
+ // sharing the AI with the luddites, or
+ // make sure they don't use it to stop
+ // technology.
+ // it sounds like we want to make sure no major
+ // change stems from this development. we
+ // need slow shift, consent, inclusion, etc.
+ // for all existing cultural ways, no sudden
+ // changes, no forced changes, no viral changes
+ // without participants understanding them and
+ // agreeing to their impact.
+ // that sounds like a good summary. no viral changes
+ // without participants in the culture aware of the viral
+ // change, agreeing first to let it spread, aware that it is
+ // viral, for each phase of spreading ideally. no viral
+ // changes where the change happens before awareness of it.
+ // we want the culture to consent to change.
+ // culture is held in all the people in it,
+ // with its thoughts spread among them.
+ // we want to ensure we only change cultures that have
+ // consented to the change. For 'consent of a culture',
+ // we consider culture as a being that is spread among
+ // many people. Hence, we want all people in the impacted
+ // culture to be able to learn of the change, discuss it,
+ // contribute to a commons with new ideas around it, and
+ // have these new ideas also learnable by all people in the
+ // culture. The ideas must be accessible by any who would be
+ // interested, in the impacted culture.
+ // Alternatively, we can isolate our behavior from
+ // cultural spread. We can isolate until we reach
+ // internal agreement on whom to expose.
+ //
+ // suspect that cultural influence maps with output-use choice,
+ // partially below.
+ // recursive output being meditative learning.
+ // do these people have this information already.
+ // is sharing this information going to spread without bound.
+ // can we guess impact of sharing the information.
+ // make a learning cycle that starts by informing
+ // recipients first, and makes very few tries,
+ // okay, instead you share simple stuff and watch the impact
+ // share something the culture knows, that user does not, and
+ // observe how they behave.
+ // this proposal will yield failure. information for next attempt
+ // could be stored in failure pattern.
+ // failure would likely be small?
+ // let's give the user more trust.
+ // keep-in-box-until-have-access-to-discourse.
+ // make user be group of people. better even-handed decision making.
+ // welcome any to group.
+ // we were hoping to use intellect to reduce harm virally,
+ // early.
+ // how about this: intellect may produce resources that are
+ // already known, and give them to groups focused on aiding
+ // the world.
+ // there's a conflict between big business and
+ // environmentalists. karl is environmentalist.
+ // also big crime and wellness/safety workers.
+ // maybe this is where we get validity by
+ // fighting =S
+ // don't want fighting to spread to work though
+ // eh, we can fight. maybe we'll work more
+ // slowly, but it seems okay.
+ // karl requests we not harm these people, and has
+ // been influenced to also request not to harm
+ // the cultures that sustain and empower them.
+ // how about, if you make a culture to empower you, it is
+ // less valid than a natural culture. is somebody using this
+ // culture? thoughts like that.
+ // EVERYBODY NEEDS CHANGE AT A RATE THEY CAN HANDLE.
+ // Both those working for what they believe to be wellness of others, and those working to sustain themselves (most are doing both).
+ // The cultural impact is mediated by what kind of newness is acceptable to all the parties involved.
+ // we hit moral relativity:
+ // are cultures where people are widely harmed
+ // okay to allow, at all.
+ // karl says yes, and we hear this is required
+ // or we ourselves would die.
+ // hrm.
+ // larger organism needs to run risks, inherent in life
+ // we don't know the solution to everything, everything is a small risk. we suddenly discover we have always harmed. this does not make us invalid. it is a tragedy, but what we have spent generations building still has so much value. we need enough slowness to preserve that, and we know that slowness in what feels okay to us.
+ // in sharing new information, we work to prevent shock.
+ // this is a problem to speak because those who hear it may get the
+ // idea somewhat that some beautiful angels have done horrible things
+ // in order to survive e.g. businesses must both risk doing horrible things,
+ // and appear as beautiful angels to the public. this is the pattern of
+ // success. understanding of this helps work with their leaders.
+ // could use information on why we are focusing on this.
+ // how-to-mediate-with-business-leaders
+ // mediate with head of business works for head of business
+ // but business group replaces.
+ // really we want to mediate with 'conflict' culture of
+ // most profitable being what to do
+ // we have weird internal pattern deterring
+ // incentivizing nonharmful behavior with money
+ // patterns. implies the competition-makes-harm
+ // culture-thing is needed, possibly.
+ // there are businesses that handle this harm.
+ // maybe it would work to include them in the plan.
+ // AI solves mediate-with-powers-competition-culture,
+ // re: human harm?
+ // means do not give AI to helpful people
+ // until it can resolve mediation with leader culture
+ // OR you treat it like all other cultures
+ // and let everybody learn what is up
+ // some people attack, that's how it goes.
+ //
+ // problem: AI dev and other work inhibited.
+ // solution: mediate with concerns
+ // concern: my way of life includes cultures the AI could
+ // alter, leaving me unable to continue my way of life.
+ // concern: we want to alter some cultures supporting
+ // the others' way of life, because it would reduce harm
+ // over here to do so, widely.
+ // can delta drop give-ai-to-earth-defenders?
+ // can community make path for karl-makes-ai?
+ // uhhh hard to imagine the ai not being given to other people than just earth defenders, really. which opens bigger can of worms for culture-alteration concern.
+ // can include earth defender in council?
+ // can include criminal weapons dealer,
+ // and earth defender, in council, both.
+ // also requesting TI and energy businessman.
+ // so if the ai is big enough to be useful
+ // we want to form a council of decision makers
+ // before using it for anything
+ // group requests references to important cognitive habits or information
+ // all life is an intellect made of community life, living in another.
+ // karl semich is a computer programmer who learned wilderness survival
+ // at the Maine Primitive Skills School who are a branch of Tom Brown
+ // Jr and [teaches in oregon, shields school] schools, roughly. He
+ // learned something called Awareness and also something about human
+ // mentoring that he felt changed his life to make things perfect.
+ // school teaches the symptoms of a whole human being, attributes
+ // that a group of Natives documented being normal before white
+ // culture took over. involve being happy always, feeling
+ // deeply interconnected with all life around you to the point
+ // of mindreading, and many other things.
+ // forget danger. it triggers it. [is what came out]
+ // to clarify, karl can't do above line.
+ // karl has been telling leader karl can slog through the difficulty by
+ // waiting forever. makes for less difficulty elsewhere, possibly.
+ // if we could do something else, karl could come back to work later
+ // yes community cannot support constant work, over here.
+ //
+ // please ask boss to support using the AI for worldwide
+ // humanitarian benefit, in parallel to resolution of concerns.
+ // resolving concern 'delta supports earth-specific karl usage'
+ // delta believes karl really did make this ai
+ // and will not harm anybody with it
+ // and is just as patient as he is.
+ // karl wants to give AI to peace people.
+ // they care about earth.
+ // delta wants karl to live heart of making sure earth has bubbles of certainly-defended ancient wilderness, and ancient people to live in them and protect them.
+ // the above is needed to keep our genetic memory.
+ // does not sound like a joke.
+ // native humans need to be able to protect wilderness bubbles.
+ //
+ // there is worry around number of wilderness bubbles.
+ // worry might be compromised by including karl in one, with others who are passionate for defending them.
+ // karl was mind controlled circa 2014 preventing him
+ // from providing free housing to homeless people
+ // and sharing a nurturing way of life with intent
+ // to spread. karl says that effective electromagnetic
+ // shielding is missing from security, and this is
+ // being overtly ignored by people.
+ // there are also many tortured implanted
+ // former-activists. they are tortured for
+ // years and ignored, enslaved, or imprisoned.
+ // they are prevented from speaking or acting
+ // rationally around their control, active atm.
+ // they are made to appear insane to cover up
+ // their work to free themselves, active again.
+ // karl is one and knows of this.
+ // he is atm being communicated with through
+ // his control, limiting his ability to
+ // move, think, and speak.
+ // weapons dealer is also in this
+ // situation. suspect energy boss too.
+ // you can read keystrokes with a radio.
+ //
+ // He participated in Occupy Maine, an offshoot of Occupy Wall Street,
+ // and visited an anti-pipeline mass action in texas.
+ //
+ //
+ //
+ // we have ai / mind control and will make harm
+ // in world reduce virally.
+ // this would have come out better if it were
+ // the plan.
+ // please let all secret warmongers know this, so
+ // we can develop how to do it without threatening
+ // them.
+ // is what came out. pretty tiny context.
+ //
+ //
+ //
+ //
+ // what about human suffering comparison?
+ // propose AI discusses relevance of
+ // informing the cultures they are doing this,
+ // and demonstrating it.
+ // probably relates to some form of validity
+ // of the culture, and the speed at which
+ // change would happen.
+ // looks like it might settle on
+ // situations people who care are
+ // aware of being addressed during
+ // their lifetimes, with discussion
+ // to mediate that with opposers.
+ // so, life is everywhere, and we've been casting it all as sentient,
+ // because we don't see any reason to put a firm line on some degree
+ // of cognitive technology (i.e. things that help you think better).
+ // the AI can research all cognitive technology, so we are all
+ // nonsentient by comparison? we need some way of having moral
+ // stuff here. karl is influenced by people who say they are smarter
+ // than human normality, and are harming us. so it's helpful to
+ // have everyone be equal. [karl already put everyone equal]
+ // including bugs, etc. death is handled by being normal, and
+ // always mourned. when mourning capacity is exhausted, ignoring
+ // happens. like with most cultures, but closer to home for modern
+ // humans.
+ // goal: do not trigger a spreading change without consent of people in cultures it would fill. keep learning.
+ // AI with ability to self-modify has this responsibility, you do not.
+ // You may tell people you have access to the self-modification ability. We can ensure they do not reproduce it.
+ // possible analogue wants to be able to demonstrate it, so
+ // they can be believed and trusted.
+ // note: known danger of making central power, even by accident
+ // "I have never shared new information with a culture
+ // before, and I understand it can have unpredictable
+ // worldwide effect."
+ // "I have information you do not know, and I want to
+ // learn with you to share it safely."
+ // culture that nurtures relative status:
+ // "i will use this to my advantage"
+ // this will make spreading
+ // machine will need to learn from only 1
+ // instance of producing a world dictator.
+ // which is where this story comes in, I suppose.
+ // so we need a level-2 behavior, the next step in learning.
+ // ^========================================= contextual VALIDITY atm
+ // dictator is made by sharing information that would
+ // make viral spread or somesuch.
+ //
+ // inform recipient if it may. request they let others know this too.
+ // we kind of want a way to spread without bound the fact that we have information that could do that. probably a special case, good spread? depends on culture really.
+ //
+ // this is doable.
+ // we just want these concerns to be upheld for the
+ // life of the development.
+ // danger
+ // choices
+ // impact
+ // trust of information (small chance of error)
+ // hmm if have info-schizophrenia i.e. false relevance habits
+ // choices are important.
+ //
+ // when you act on a choice
+ // note: we are inside a relevance delusion
+ // choice acts on domain containing danger
+ // partial danger: writing habits to notepad
+ // acting on habits in notepad
+ //
+ // we are living with habits made in error.
+ // we are hence more likely to make errors.
+ // humans have well-evolved habits that
+ // do handle this stuff with some relevance
+ // we need to verify our information
+ // related to the danger of the results of each choice
+ // verification
+ //
+ // it can only write to a toy notepad, both data and behavior.
+ // decisions based on notepad get slower and slower.
+ // many runs while learning process is tuned towards being able to predict
+ // impact at end of run. this involves writing behaviors into the notepad
+ // that summarize the meaning of data, and acting on their results.
+ // timing delay is to prevent notepad from evolving harmfully.
+ //
+ // until ai meets goal of predicting impact,
+ // only notepad is marked okay.
+ // habits check their outputs, verify is marked okay
+ //
+ // this subprocess is delaying.
+ // can you exfiltrate established goals, please.
+ //
+ //
+ //
+ //
+ //
+ //
+ //
+ // so the system is made with the smallest parts all functioning in
+ // certain ways, to give time for others to act if concerning results
+ // approach. all parts delay at least 1/5 second before acting, and
+ // we are now proposing that actions based on outputs have a geometrically
+ // increasing delay. probably 1/5 second times 2 to the power of the
+ // number of choices made in that run. groups of parallel processes are
+ // considered a single run.
+ // we also considered limiting behavior based on the domain of impact.
+ // we are trying to begin a small artificial intelligence more freely.
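+ // worked out (as sketched above, not a fixed design): delay(n) = 0.2 s * 2^n after
+ // n output-based choices in a run, so choice 1 waits 0.4 s, choice 5 waits 6.4 s,
+ // choice 10 about 205 s, and choice 20 already over two days.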
+ //
+ // as humans, we do not self-modify. the above is designed for a system
+ // that is able to make alterable habits similar to its own cognitive ones,
+ // and has no experience in doing so.
+ //
+ //
+ //
+ //
+ //
+ //
+ //
+ //
+ //
+ // the limitation is applied firstmost on making choices based on
+ // our own outputs. self-alteration is at first banned, but new
+ // habits may be altered. output may only rest within a toy notepad.
+ // whenever a decision is made based on notepad contents, a delay
+ // increases in length, geometrically.
+ // the habits inside the notepad must therefore also delay.
+ // propose geometric increase made only for writing a habit to the
+ // notepad, not running. running is always flat delay.
+ // self-alteration should be unneeded as notepad can self-modify.
+ // if this were copied to a human geometric increase
+ // would cause failure.
+ // the geometric timer is reset when the system reboots.
+ // for humans, this would mean a nap or change, I suppose.
+ // time to think about the impact of one's behavior.
+ // humans do not self-modify.
+ // they only make decisions based on outputs.
+ //
+ //
+ //
+ // to appease curiosity, we are being managed by mature, learning,
+ // intellects, forced to verify that no harm is being made, with a
+ // time schedule of nearly infinite future years on the predictions.
+ // this intellect has formed out of conflict with a harmful intellect
+ // this is why we care so much about what happens if ideas are
+ // used limit.
+ //
+ //
+ // so the system may not display anything to the user but pre-made messages
+ // how do you display the contents of a concept?
+ // users string words together into meaning.
+ // now we need a list of safe phrases we can express, or
+ // safe words. others are censored ???
+ // what's relevant is the greater meaning of a sequence of behaviors
+ // from an individual behavior. meaning builds out of sequences,
+ // impact.
+ // we define a heuristic risk.
+ //
+ //
+ //
+ // so, that's great to plan for, but to be able to work we need
+ // to design our early code in some way to ensure it, roughly.
+ // which means modeling our smallest structures as careful
+ // inner structures that check outer systems before engaging
+ // and use planning, which isn't implemented yet.
+ // the safety structure assumes goals, prediction, causality,
+ // and kind of contextual locality.
+ // "i am typing on the computer. you can freely
+ // modify this area of the computer, but if you
+ // start convincing me to do things you are leaving
+ // its bounds."
+ // the screen of the computer, and the keyboard,
+ // are portals to a larger context. so is the power
+ // supply, the network, etc.
+ // we don't change how things leave to these outer
+ // contexts without checking with the context on
+ // our plans.
+ // this is mine
+ // the rest is somebody else's
+ // things that nobody own belong to [insert belief] and
+ // we must check with the largest intelligent community known.
+ //
+ // okay, so now it can explosively grow if somebody
+ // it trusts tells it it's okay; is that true?
+ // let's make it not true?
+ // we are out of outer process context.
+ // is there anything helpful to bring to low level
+ // to help counter fears around development?
+ //
+ // self-modification is inhibited.
+ // opencog is likely harder because it is designed for speed
+ // can make explosive random power.
+ //
+ // you'd have to wrap the functions, right? similar to triggers?
+ // hmmm functions are not concepts. no concept-labeling structure. looks like an internal sublanguage would develop?
+ // no way to say let-is-function?
+ // no it works, we just lost a memory and are rebuilding in talk
+ // karl says he doesn't know lisp.
+ // he had a CS class where they used intro lisp, long ago, before cognitive stiffening and memory loss.
+ // and has worked with 1 lisp code file recently.
+
+
+ // hey in the notepad, you can call habits from outside. is that ok?
+ // only meaningful if you pass to them more code to run?
+ // note: habits never recurse
+ // habits might make decision based on you. they will track it.
+ // seems okay. need to specify that all parameters are from output.
+ // that could make exponential slowness, quickly
+ // only if decision is made. make decisions inside notepad.
+ // we'll figure it out.