Array notation

Array notation is a kind of notation for large numbers. Compared to combinatoric functions, array notations have more complex definitions but are easier to evaluate. Several array notations are already known, such as Bowers’ exploding array function (BEAF), Bird’s array notation (BAN), the Extensible-E system (ExE), and hyperfactorial array notation (HAN).

Here I present a strong array notation, and it goes far beyond the notations mentioned above. It is made up of several parts, shown below.

The following 7 parts make the array notation as strong as the R function; they’re called the first generation.

On Dec 26, 2017, the rules (for the condition leading into the “adding rule”) from pDAN to DAN were updated, avoiding the recursion level platform. On Feb 9, 2018, a problem in pDAN was found and then fixed by an additional “Rule 4”.

The following 5 parts are my first attempt to go beyond the R function (i.e. the second generation), but they don’t work.

The following 5 parts are my second attempt at a second generation, but they still don’t work.

79 thoughts on “Array notation”

    • Deedlit says:

      It seems like he already has (or at least tried to). The new NDAN is a replacement for the old one, and WDEN and mWDEN are analogues to DEN and mDEN. WDmEN seems to go in a different direction from soDEN and hoDEN; perhaps he sees this as a better/stronger way to extend things.


    • Samuel Fields says:

      Aarex I am a huge fan of your work with large numbers. I have been studying your work with large numbers for years. Could you show me the steps to go from Otoperoogol to Otopersuperoogol? These numbers are listed on your Extensible Up-arrow System part 4 Dimensional/Nested Array Up-arrow webpage.


        • Username5243 says:

          The question was asked before you closed it, silly. And I’m sure he’ll find the time to bring back the extended arrow notation some time in the future…

          And he’s made primary dropper-dropping notation, part 12. Whether he plans to make a part 13, I don’t know.


  1. Samuel Fields says:

    Hypcos how would you describe a 10^100 array of 10s array of 10s with your Strong Array Notation?


  2. Samuel Fields says:

    Hypcos how would you describe a Golapulus (10^100 & 10 & 10) with your Strong Array Notation?
    Where would a Golapulus be in your Strong Array Notation?


      • Dessa Hall says:

        Ok, it’s not impossible. Easy.
        Golapulus is equal to 10^100 & 10 & 10.
        Golapulus = {10, 100 [1 [1 [2 ¬ 2] 2 ¬ 2] 2] 2}
        Golapulus converted into strong array notation is s(10,100{1{1{2^`}2`}2}2).
        Here it is, Samuel.


    • The value of TREE(3) is too hard to work out. It has a lower bound of s(3, s(3, 262138 {1 {1`1,2 `} 2} 2) {1 {1`1,2 `} 2} 2 {1 {1`1,2 `} 5} 2). However, its upper bounds are much weaker, because we don’t know the growth rate of TREE(n). We just know TREE(3)<SSCG(3), where SSCG(n) grows slower than s(n,n{1,,2,2}2).
      So s(3, s(3, 262138 {1 {1`1,2 `} 2} 2) {1 {1`1,2 `} 2} 2 {1 {1`1,2 `} 5} 2) < TREE(3) << s(3,3 {1,,2,2} 2).


  3. Aarex Tiaokhiao says:

    SAN can be extended: +’ is dropper-droppers for ::::…+ and ++ is dropper-droppers for +::::…; then we can define nested droppers for +++…, like (N^+)


  4. Samuel Fields says:

    Does anyone know how many symbols it would take to define (pDDN) in the language of First Order Set Theory (FOST)?


  5. Samuel Fields says:

    I would like to ask a few questions about functions and algorithms. So addition is the function and add or adding is the algorithm; multiplication is the function and multiply or multiplying is the algorithm; exponentiation is the function and exponentiate or exponentiating is the algorithm; tetration is the function and tetrate or tetrating is the algorithm. Is this correct?

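The ladder described in the question above (each operation iterating the one below it) can be sketched as a single recursive hyperoperation. This is a standard construction for illustration, not part of strong array notation itself:

```python
def hyperop(level, a, b):
    """Hyperoperation ladder: level 1 = addition, 2 = multiplication,
    3 = exponentiation, 4 = tetration, and so on.
    Each level is defined by iterating the level below it."""
    if level == 1:
        return a + b
    if b == 0:
        # Base cases chosen so that each level iterates the previous one.
        return 1 if level >= 3 else 0
    return hyperop(level - 1, a, hyperop(level, a, b - 1))

# hyperop(2, 3, 4) = 3 * 4 = 12
# hyperop(3, 3, 4) = 3 ** 4 = 81
# hyperop(4, 2, 3) = 2^(2^2) = 16
```

So “tetrating” is just running this recursion at level 4; the function/algorithm distinction in the question maps onto the mathematical operation versus the procedure that computes it.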

  6. Samuel Fields says:

    If a Turing machine were to compute the function f(3) = Graham’s number, does that mean that a Turing machine would take 3 consecutive 1s as an input, then operate for a finite number of steps, then halt with a Graham’s number of consecutive 1s left on its tape?


    • legionmammal978 says:

      It depends on your I/O encoding scheme; Turing machines by themselves don’t have any notions of this.

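To make the point about I/O encodings concrete, here is one toy convention: encode n as n consecutive 1s on the tape. The simulator format and the successor machine below are illustrative choices of mine, not any standard:

```python
def run_tm(transitions, tape, state="start", pos=0, max_steps=10_000):
    """Run a Turing machine given as a table
    {(state, symbol): (new_symbol, move, new_state)}.
    The tape is a dict from position to symbol; blank cells read '_'."""
    tape = dict(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(pos, "_")
        new_sym, move, state = transitions[(state, sym)]
        tape[pos] = new_sym
        pos += 1 if move == "R" else -1
    return tape

def encode(n):
    # Unary input convention: n is written as n consecutive 1s.
    return {i: "1" for i in range(n)}

def decode(tape):
    # Unary output convention: the answer is the number of 1s left.
    return sum(1 for s in tape.values() if s == "1")

# A successor machine: walk right over the input, append one extra 1.
succ = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(decode(run_tm(succ, encode(3))))  # 4
```

Under this convention, “computing f(3) = Graham’s number” would indeed mean: start with 111 on the tape, run finitely many steps, halt with Graham’s number of 1s. A different encoding scheme (binary, say) would change the tape contents but not what is computable.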

  7. Samuel Fields says:

    What is the main difference between computable functions and uncomputable functions? Is it that computable functions use finite time and space and uncomputable functions use infinite time and space? Is this the reason?


    • legionmammal978 says:

      A Turing-computable function can be simulated in finite time by a Turing machine. An uncomputable function is a function which is not computable.


      • Samuel Fields says:

        So for instance, the function f(3) = 3^^^^10 is computable, because a Turing machine could take 3 as input on its tape and produce the value (output) of 3^^^^10 in finite time? An uncomputable function goes beyond finite time?


        • legionmammal978 says:

          Yes, the first number is computable. As for your second question, even though there exist constructs such as infinite time Turing machines (ITTMs), there still exist functions that even those cannot compute, e.g. the ITTM Busy Beaver function.


          • Samuel Fields says:

            I think I’m starting to understand. Turing machines can compute functions, because of their infinite memory tapes and runtime, but Turing machines can not compute themselves. An nth-level Turing machine can not go beyond itself or simulate itself.


            • legionmammal978 says:

              Well, there do exist universal Turing machines which can simulate all other Turing machines, but none can, say, solve the Halting Problem. This is the case with most TM derivatives; for many, you can build a version of the Busy Beaver function which neither it nor less powerful machines can compute.

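The point above — universal simulation is possible, but deciding halting in advance is not — can be sketched with the classic diagonal argument, phrased in Python rather than Turing machines. The `halts` decider here is hypothetical; the sketch shows why no correct one can exist:

```python
# Suppose halts(f) correctly decided whether calling f() eventually halts.

def paradox(halts):
    def d():
        if halts(d):      # if the decider says d halts...
            while True:   # ...then d loops forever,
                pass
        # ...and if the decider says d loops, d halts immediately.
    return d

# Whatever halts(d) answers, d does the opposite, so no correct
# halts() can exist -- even though a universal interpreter (like
# Python itself) can simulate any program step by step.
```

The same diagonal trick relativizes: for ITTMs and most other TM derivatives, the analogous Busy-Beaver-style function escapes the machines it is defined over.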

              • Samuel Fields says:

                Are the states of Turing machines in the same category with hyperoperators like +,-,÷,*, and up arrows?


                • legionmammal978 says:

                  Although every hyperoperation can be simulated by some Turing machine, not every function computable by Turing machines can be represented only with hyperoperations.


                • Samuel Fields says:

                  So a Turing machine could compute the function f(10) = agoraphobia = E100 &(&(&(…&(&(&({#,#,1,2})))…)))100 w/100 &’s (Sbiis Saibian’s number), because it has an infinitely long tape in both directions (memory storage space) and infinite runtime. It just needs enough states to do so. The states are the Turing machine’s source of computational power. Is this correct?


                • legionmammal978 says:

                  A Turing machine must halt in a finite amount of time, but the main power of the Turing machine lies in its arbitrarily long tape.


      • Samuel Fields says:

        Hypcos your Strong Array Notation has the same growth rate as Taranovsky’s ordinal notation. Is this correct?


          • Samuel Fields says:

            What’s the difference between ordinal notations, ordinal collapsing functions, and the fast-growing hierarchy? Are there any functions that have the growth rate of aleph-one (ℵ₁), the first uncountable ordinal?


            • The fast-growing hierarchy accepts a countable ordinal and outputs a function from natural numbers to natural numbers.
              An ordinal collapsing function accepts several ordinals and outputs an ordinal. It’s a kind of ordinal notation.
              An ordinal notation is a notation to express ordinals.
              The FGH only accepts countable ordinals, so a function with growth rate aleph_1 (where growth rate is defined using the FGH) doesn’t exist.

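The FGH description above can be sketched for ordinals below ω². The pair encoding (a, b) for ω·a + b is an ad-hoc choice for this illustration, not a real ordinal notation:

```python
def fgh(ordinal, n):
    """Fast-growing hierarchy below omega^2, with the ordinal
    omega*a + b written as the pair (a, b).
    f_0(n) = n + 1; f_{k+1}(n) = f_k iterated n times on n;
    at a limit ordinal, take the n-th member of its fundamental
    sequence, e.g. f_omega(n) = f_n(n)."""
    a, b = ordinal
    if a == 0 and b == 0:          # f_0: the successor function
        return n + 1
    if b == 0:                     # limit ordinal omega*a: diagonalize
        return fgh((a - 1, n), n)
    result = n                     # successor ordinal: iterate n times
    for _ in range(n):
        result = fgh((a, b - 1), result)
    return result

# fgh((0, 1), 3) = 6   (f_1 doubles its input)
# fgh((0, 2), 3) = 24  (f_2(n) = n * 2^n)
# fgh((1, 0), 2) = fgh((0, 2), 2) = 8   (f_omega diagonalizes)
```

Note that every ordinal fed in here is countable: the hierarchy climbs by diagonalizing over fundamental sequences, which is exactly why an FGH-style growth rate of ℵ₁ is not defined.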

  8. Samuel Fields says:

    How does the halting problem go beyond recursion? Why are all recursive functions Turing-computable? How do recursive functions relate to Turing machines? How does the Church-Kleene Ordinal relate to Turing machines?


    • The halting problem goes beyond recursion because it implies that functions like Rayo’s function are uncomputable. All recursive functions over the successor function are computable because all mu-recursive functions have been proven to be computable. Also, the fundamental sequence of the Church–Kleene ordinal uses a function whose definition involves Turing machines.


    • Douglas Shamlin Jr. says:

      Limit of LAN: ω^ω
      Limit of exAN: ε_0
      Limit of EAN: ψ(ε_(Ω+1))
      Limit of mEAN: ψ(Ω_ω)
      Limit of pDAN: ψ(ε(T+1)) (in UNOCF)
      Limit of sDAN: ψ(X^ω) (in UNOCF)
      Limit of DAN: ψ(C(1{ω}0)) (in UNOCF)
      Limit of NDAN: ψ(C(1{1{1{…}0}0}0)) (ω nests) (in UNOCF)

