Study Guide 2023+

Computer Science and Programming Notes

Warning: These notes are a partial, ongoing work and may contain typos or inaccuracies. (They are kept factually accurate, time permitting.)

They are being consolidated from many disparate notes created in the past, and the layout/organization will gradually improve over time!

Please view them on a computer as they are not optimized for mobile (although you can still view them on mobile, along with the Flashcards, at your own risk)!

Introduction: General Comments

Some techniques I've been studying along with an assortment of answers from (or links to) a variety of practice/testing sites (2023+).

Profiles

  1. https://www.credly.com/users/adam-gerard
  2. https://www.coursera.org/user/16293d668e9feccdd20df40f3bf2031e
  3. https://www.hackerrank.com/KardashevScale?hr_r=1
  4. https://www.codewars.com/users/Thoughtscript
  5. https://stackoverflow.com/users/4955304/adam-gerard
  6. https://leetcode.com/Thoughtscript/
  7. https://projecteuler.net/profile/Thoughtscript.png

Blog

  1. https://www.thoughtscript.io/

Resume

  1. https://render-static-fs.onrender.com/Plain_Resume_2023.html
  2. https://www.linkedin.com/in/adamintaegerard/
  3. https://www.thoughtscript.io/portfolio.html

Flashcards

  1. https://render-static-fs.onrender.com/flash_cards_2023.html

Note:

Use the `?topics=` query parameter to narrow down the displayed flash cards.

Multiple topics can be selected at a time:

* `?topics=java,javascript`
* `?topics=sql,ruby`

Notes

All ranked Solved Examples are top 50% time-complexity or top 50% space-complexity for at least one valid run (given the significant variance across runs; this applies primarily to LeetCode) and are original unless otherwise noted. (I'm flagging exceptions for further review.)

Algorithms: Big O Notation

Time Complexity

  1. Specifies the relationship between the number of operations performed across different Input Sizes for a Function. (Which in turn impacts the total amount of time a Function may take to run to completion.)
    • It's convenient to think of Time Complexity as successively running a Function with increasing or decreasing Input Sizes.
    • In practice, most computers run so quickly that there's no discernible difference in the amount of time it takes to run any kind of Function for small Input Sizes.
    • Time Complexity (e.g. Asymptotic Analysis) instead is concerned with the abstract mathematical relationships between a Function, its Control Flow, the number of operations it performs, the Input Size, and its Computability.
  2. By convention, Average Big O Time Complexity almost always indicates the worst-case time complexity.
    • It's also standard practice to drop multiples e.g. - O(2n) is usually just written as O(n).
  3. Time Complexity is almost always measured using Big O Notation.

Big O Notation

Given an input n, calculate, a priori, abstracting away certain physical implementation specifics, the time it takes to execute an algorithm relative to n. By convention, Big O Notation typically measures only the worst-case time complexity (upper bound).

Formal Definition

The best formal definition I’ve found:

  1. Where f, g are functions from N to the positive reals, R+; and
  2. Where O(g) is a set of functions from N to the positive reals, R+; and
  3. Where ∈ stands for the element inclusion (Set membership) operator:
    • f ∈ O(g) means: asymptotically and ignoring constant factors, f does not grow more quickly than g over time or input size.
    • f ∈ O(g) means: asymptotically and ignoring constant factors, g is the worst-case time complexity for f over time or input size.

Refer to: http://www.uni-forst.gwdg.de/~wkurth/cb/html/cs1_v06.pdf

Constant Time

Invariant (constant, unchanging) Time Complexity: the number of operations performed remains constant and does not depend on the Input Size:
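E.g. - a minimal sketch (the helper name is illustrative): an index lookup performs one operation no matter how large the Array is:

```javascript
// O(1): one array access regardless of the input's size
const firstElement = arr => arr[0]

console.log(firstElement([9, 8, 7]))      // 9
console.log(firstElement(new Array(1e6))) // undefined (still one operation)
```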

Linear Time

Time-Complexity is Directly Proportional to (the number of Inputs) n:
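E.g. - a minimal sketch (the helper name is illustrative): the loop body runs once per element, so doubling n doubles the operations:

```javascript
// O(n): one pass over every element
const sum = arr => {
  let total = 0
  for (let i = 0; i < arr.length; i++) total += arr[i]
  return total
}

console.log(sum([1, 2, 3, 4])) // 10
```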

Logarithmic Time

The Time Complexity increases (or decreases) Logarithmically (gradually, with diminishing change) relative to change in Input Size - neither Constant, Linear, nor Exponential.
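E.g. - a minimal sketch (the helper name is illustrative): the remaining input is halved every iteration, so doubling the Input Size adds only one more step:

```javascript
// O(log n): halve the remaining input each iteration
const halvingSteps = n => {
  let steps = 0
  while (n > 1) {
    n = Math.floor(n / 2)
    steps++
  }
  return steps
}

console.log(halvingSteps(64))  // 6
console.log(halvingSteps(128)) // 7 - double the input, one extra step
```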

Logarithms and Logarithmic Functions

  1. log specifies the Exponent n that the Base β must be raised to in order to obtain some x (e.g. - finds the Exponent given βⁿ = x):

    • logᵦ(x) = n if and only if (βⁿ = x)
    • (log baseᵦ x = n) if and only if (βⁿ = x)
    • E.g. - (log base₂ 64 = 6) if and only if (2⁶ = 64)
    • E.g. - (log₂ 64 = 6) if and only if (2⁶ = 64)
  2. log(x) = y is written as shorthand for an assumed or suppressed Base β:

    • E.g. - when using Base 10 log₁₀(x) = y .
    • When using Big O Notation, the assumed Base is usually taken to be Base 2 (Binary, base₂, 2, or log₂(x) = y).
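These identities can be checked directly in JavaScript (logBase is my own helper, using the change-of-base rule):

```javascript
// log₂(64) = 6 because 2⁶ = 64
console.log(Math.log2(64)) // 6
console.log(2 ** 6)        // 64

// Arbitrary Bases via the change-of-base rule: log_β(x) = ln(x) / ln(β)
const logBase = (base, x) => Math.log(x) / Math.log(base)
console.log(logBase(10, 1000)) // ≈ 3 (within floating-point error)
```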

https://en.wikipedia.org/wiki/Logarithm

Quadratic Time

Time-Complexity increases Quadratically (proportional to n²) with respect to (the number of inputs) n.
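E.g. - a minimal sketch (the helper name is illustrative): nested loops compare each of the n elements against all n elements, so the operation count is n²:

```javascript
// O(n²): nested loops over the same input
const countPairs = arr => {
  let ops = 0
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) ops++
  }
  return ops
}

console.log(countPairs([1, 2, 3]))    // 9
console.log(countPairs([1, 2, 3, 4])) // 16
```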

Calculations

Big O calculations can be combined:
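E.g. - sequential steps add together and the dominant term wins (a minimal sketch; the helper name is illustrative):

```javascript
// One O(n) pass followed by an O(n²) pass: O(n) + O(n²) = O(n²)
const combined = arr => {
  let ops = 0
  for (let i = 0; i < arr.length; i++) ops++   // O(n)
  for (let i = 0; i < arr.length; i++)
    for (let j = 0; j < arr.length; j++) ops++ // O(n²)
  return ops // n + n² operations overall - reported as O(n²)
}

console.log(combined([1, 2, 3])) // 12 (3 + 9)
```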

Ambiguity and Equivocation

There are at least three equivocations that are commonly encountered regarding the above:

  1. Imprecision in computing exact Time Complexity. For example:
    • A Function may be bounded by a range of Input Sizes so that the result is actually Constant (for that range of Input Sizes) but will nevertheless be treated as some other kind of Time Complexity (or vice-versa: a Function may be more properly classified under some other label but be considered, say, Constant). Consider if the Logarithmic Function example above were bounded to Input Sizes 4-6.
    • For some Functions and Input Sizes, a multiple X·N equals a power N^Y. Functions with that property may nevertheless be described by different Big O Time Complexities.
    • It's standard convention to omit Big O Notation multiples: O(2n) is usually just written as O(n).
  2. Quasilinear Time, Sublinear Time, and the like are usually ignored (and Big O Notation is essentially "rounded" to one of the four categories described above).
  3. "Inner" and "Outer" Time Complexity are often equivocated:
    • "Inner" - an Input is passed as a Parameter or Argument to a Function. Many Functions will alter this Value (in-place, for example). Some such Values are modified within a Function, each loop through a Function has a certain number of operations to be performed, a Function is a set of Functions, etc.
    • "Outer" - consideration of a Function's Time Complexity by comparison of different Input Sizes to their total number of operations (per the above).
  1. https://en.wikipedia.org/wiki/Logarithm
  2. https://www.thoughtscript.io/algos/0000000000003
  3. http://www.uni-forst.gwdg.de/~wkurth/cb/html/cs1_v06.pdf
  4. https://en.wikipedia.org/wiki/Time_complexity

Algorithms: Space Complexity

There’s much more ambiguity and disagreement about definitions with respect to Space Complexity: https://www.studytonight.com/data-structures/space-complexity-of-algorithms observes that:

“Sometimes Auxiliary Space is confused with Space Complexity. But Auxiliary Space is the extra space or the temporary space used by the algorithm during it's execution.”

Furthermore, small Input Sizes are often just called O(1) (despite being strictly Linear): https://stackoverflow.com/questions/44430974/space-complexity-of-an-array.

Big O Notation

Big O Notation for Space Complexity works the same way as Big O Notation for Time Complexity, but measures memory consumed (rather than operations performed) relative to Input Size.

Algorithms: Famous Algorithms

Minimum Spanning Tree

Kruskal’s Algorithm

Prim’s Algorithm

Dijkstra's Algorithm
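No snippet is attached here yet; a minimal sketch of Dijkstra's shortest-path algorithm (the graph shape and all names below are my own assumptions, and a linear scan stands in for a Priority Queue to keep it short):

```javascript
// Dijkstra's Algorithm: repeatedly settle the closest unvisited node
// graph shape (assumed): { node: { neighbor: weight, ... }, ... }
const dijkstra = (graph, source) => {
  const dist = {}, visited = {}
  for (const node of Object.keys(graph)) dist[node] = Infinity
  dist[source] = 0

  for (let i = 0; i < Object.keys(graph).length; i++) {
    // Pick the unvisited node with the smallest tentative distance
    let u = null
    for (const node of Object.keys(graph)) {
      if (!visited[node] && (u === null || dist[node] < dist[u])) u = node
    }
    if (u === null || dist[u] === Infinity) break
    visited[u] = true

    // Relax each outgoing edge
    for (const [v, w] of Object.entries(graph[u])) {
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w
    }
  }
  return dist
}

const G = {
  A: { B: 1, C: 4 },
  B: { C: 2, D: 6 },
  C: { D: 3 },
  D: {}
}
console.log(dijkstra(G, "A")) // { A: 0, B: 1, C: 3, D: 6 }
```

With a binary heap in place of the linear scan this runs in O((V + E) log V) rather than O(V²).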

Heap’s Algorithm

// Heap's Algorithm
// https://stackoverflow.com/questions/27539223/permutations-via-heaps-algorithm-with-a-mystery-comma#27540053
function permutation(originalArr) {
  // Copy the input so the caller's array isn't mutated by the swaps
  let transferArr = [...originalArr], resultsAsStrings = []

  function swap(end, begin) {
    let original = transferArr[begin]
    transferArr[begin] = transferArr[end]
    transferArr[end] = original
  }

  function heaps_algorithm(n) {
    if (n === 1) {
      resultsAsStrings.push(transferArr.join(""))
      return
    }

    for (let i = 0; i != n; ++i) {
      heaps_algorithm(n - 1)
      swap(n - 1, n % 2 === 0 ? i : 0)
    }
  }

  heaps_algorithm(originalArr.length)
  return resultsAsStrings
}

const a = [1, 2, 3, 4]
console.log(permutation(a))

const b = [0, 1, 2]
console.log(permutation(b))

const c = [2, 3, 4]
console.log(permutation(c))

Kadane's Algorithm

Given:

int[] input = new int[]{2, 3, -8, 7, -1, 2, 3};
int result = input[0];

for (int i = 0; i < input.length; i++) {
  int current = 0;

  for (int j = i; j < input.length; j++) {
    current += input[j];
    result = Math.max(result, current);
  }
}

//return result;

The Brute-Force implementation O(N²) can be optimized to O(N):

int result = input[0]; 
int maxEnding = input[0]; 

for (int i = 1; i < input.length; i++) {
  int N = input[i];

  maxEnding = Math.max(maxEnding + N, N); 
  result = Math.max(maxEnding, result);
}

Observe:

  1. There are three conditions:
    • New highest sum
    • Replacing the last highest sum with current index value
    • Same highest sum
  2. Highest subarray sums are transitive across indices (except in the second condition, where a positive number follows a negative one).
  3. This necessitates two Variables: one (result) is derived from the other (maxEnding). (I attempted to reduce this to a single combined Math.max expression with one Variable and was unable to optimize beyond the above.)
  1. https://www.geeksforgeeks.org/largest-sum-contiguous-subarray
  2. https://www.math.umd.edu/~immortal/CMSC351/notes/maxcontiguoussum.pdf

Algorithms: Some Fun Ones

Some fun algorithms I've encountered from here and there.

Implement Lodash Curry

Challenge: implement Lodash .curry() from scratch. Official Code snippet.

function W(func) {
    // Key insight here is to use an aux buffer to store args into.
    let resolvedArgs = []
    return C(func, resolvedArgs)
}

function C(func, resolvedArgs, arity=func.length) {
    // Return an anon function with ...args here
    return function(...args) {
        // Copy into a fresh array so separate calls to the curried
        // function don't accumulate arguments into a shared buffer
        const nextArgs = [...resolvedArgs, ...args]
        // This is the trickiest part conceptually: recursively curry here
        if (args.length < arity) return C(func, nextArgs, arity - args.length)
        return func(...nextArgs)
    }
}

function T(a,b,c) {
    return [a,b,c]
}

const F = W(T)

console.log(T("A","B","C"))
console.log(F("A", "B", "C"))
console.log(F("A")("B", "C"))
console.log(F("A", "B")("C"))
console.log(F("A")("B")("C"))
["A", "B", "C"]
["A", "B", "C"]
["A", "B", "C"]
["A", "B", "C"]
["A", "B", "C"]

Flatten Without Flatten

Flatten an Array by a specified amount without using .flat():

var flatten = function (arr, flatten_by) {
    console.log(`Input: ${JSON.stringify(arr)} flatten_by: ${flatten_by}`)

    if (flatten_by === 0) return arr

    let str_arr = JSON.stringify(arr),
        result = "[",
        f_rem = flatten_by

    for (let i = 1; i < str_arr.length - 1; i++) {
        const C = str_arr[i]

        if (C === "[") {
            f_rem--
            // Balance checks here aren't symmetrical
            if (f_rem < 0) result += C

        } else if (C === "]") {
            f_rem++
            if (f_rem <= 0) result += C

        } else {
            result += C

        }
    }

    result += "]"
    console.log(`Output: ${result}`)
    return JSON.parse(result)
}

console.log(flatten([0, 1, 2, [3, 4]], 0))
console.log(flatten([0, 1, 2, [3, 4]], 1))
console.log(flatten([0, 1, 2, [[3, 4]]], 1))
console.log(flatten([0, 1, 2, [[3, 4]]], 2))
console.log(flatten([0, 1, [[[2, 3, 4]]]], 1))
console.log(flatten([[0, 1], 2, [[3, 4]]], 2))
console.log(flatten([[1], 1, [1], 1, [1], 1], 1))
console.log(flatten([[1], 1, [1], 1, [1], 1], 9))
console.log(flatten([[1], 1, [1], 1, [1], 1], 0))
console.log(flatten([[[[[[[[[[[[[[[[[9]]]]]]]]]]]]]]]]], 8))
console.log(flatten([[[[[[[[[[[[[[[[[9]]]]]]]]]]]]]]]]], 18))
"Input: [0,1,2,[3,4]] flatten_by: 0"
[0, 1, 2, [3, 4]]
"Input: [0,1,2,[3,4]] flatten_by: 1"
"Output: [0,1,2,3,4]"
[0, 1, 2, 3, 4]
"Input: [0,1,2,[[3,4]]] flatten_by: 1"
"Output: [0,1,2,[3,4]]"
[0, 1, 2, [3, 4]]
"Input: [0,1,2,[[3,4]]] flatten_by: 2"
"Output: [0,1,2,3,4]"
[0, 1, 2, 3, 4]
"Input: [0,1,[[[2,3,4]]]] flatten_by: 1"
"Output: [0,1,[[2,3,4]]]"
[0, 1, [[2, 3, 4]]]
"Input: [[0,1],2,[[3,4]]] flatten_by: 2"
"Output: [0,1,2,3,4]"
[0, 1, 2, 3, 4]
"Input: [[1],1,[1],1,[1],1] flatten_by: 1"
"Output: [1,1,1,1,1,1]"
[1, 1, 1, 1, 1, 1]
"Input: [[1],1,[1],1,[1],1] flatten_by: 9"
"Output: [1,1,1,1,1,1]"
[1, 1, 1, 1, 1, 1]
"Input: [[1],1,[1],1,[1],1] flatten_by: 0"
[[1], 1, [1], 1, [1], 1]
"Input: [[[[[[[[[[[[[[[[[9]]]]]]]]]]]]]]]]] flatten_by: 8"
"Output: [[[[[[[[[9]]]]]]]]]"
[[[[[[[[[9]]]]]]]]]
"Input: [[[[[[[[[[[[[[[[[9]]]]]]]]]]]]]]]]] flatten_by: 18"
"Output: [9]"
[9]

Algorithms: Search

A Needle refers to the Value, item, or Object one is searching for within a Haystack (the collection of items one is searching within).

Search Techniques

Breadth First Search (BFS) - graph search; visit all of a node's neighbors (horizontally, by row or level) before moving deeper.

Depth First Search (DFS) - graph search; follow a branch vertically (by column, to its full depth) before backtracking.

Sorted by Average Big O Time Complexity.

Hash Table - define a hashing method, each value is assigned a hash key that is unique to that value, then lookup a value using the hash key.

const hash = (needle, haystack) => {
  console.log(`Find: ${needle} in [${haystack}]`)

  let table = []
  for (let i = 0; i < haystack.length; i++) {
    table[i] = -1
  }

  let keyHash = 0

  const simpleHash = val => val % table.length

  for (let i = 0; i < haystack.length; i++) {
    let flag = haystack[i] == needle
    const currentHash = simpleHash(haystack[i])

    if (table[currentHash] === -1) {
      table[currentHash] = haystack[i]
      if (flag) keyHash = currentHash

    } else {
      const NEXT = table.indexOf(-1)
      table[NEXT] = haystack[i]
      if (flag) keyHash = NEXT

    }
  }

  console.log(`Hash Table: [${table}] - Length: ${table.length}`)

  const BEGIN = new Date()
  const result = table[keyHash]
  const indexOf = haystack.indexOf(needle)
  const END = new Date()

  console.log(`Hash: ${keyHash} - Value: ${result} - IndexOf ${indexOf} in unsorted haystack`)
  console.log(`Time taken: ${END - BEGIN}`)
}

Linear - iterate through the Haystack (n elements) until the Needle is found.

const linear = (needle, haystack) => {
  const BEGIN = new Date()
  let result = -1

  console.log(`Find: ${needle} in [${haystack}]`)

  for (let i = 0; i < haystack.length; i++) {
    if (needle === haystack[i]) {
      result = i;
      break;
    }
  }

  const indexOf = haystack.indexOf(needle);
  const END = new Date()
  console.log(`Index: ${result} - IndexOf: ${indexOf}`)
  console.log(`Time taken: ${END - BEGIN}`)
}

Binary - take a sorted Array and recursively divide it in two, comparing the Needle against the middle value of each divided Array.

const binary = (needle, haystack) => {
  haystack.sort((a, b) => a - b)
  console.log(`Sorted haystack: ${haystack}`)

  let l = 0, r = haystack.length - 1

  while (l <= r) {
    const M = Math.floor((l + r) / 2)

    if (haystack[M] === needle) {
      console.log(`${haystack[M]} at sorted index ${M}`)
      break
    } 

    if (haystack[M] < needle) l = M + 1

    // haystack[M] > needle
    else r = M - 1
  }
}

Consult a clean implementation here: https://www.instagram.com/reel/CYjmjbFF_8g/?utm_source=ig_web_copy_link

  1. https://www.instagram.com/reel/CYjmjbFF_8g/?utm_source=ig_web_copy_link

Algorithms: Sort

Sorted by Average Big O Time Complexity.

Non-Comparison Sorts

Insertion Sort - use a second array to rebuild the first array in sorted order, looping through and inserting the next-lowest value into the second array. (Strictly speaking, Insertion Sort is a Comparison Sort; the implementation below works like a Selection Sort into an auxiliary array.)

// Two arrays - lowest from original goes into next open spot in result
const insertion = arr => {
  console.log(`Starting array: ${arr} - Length: ${arr.length}`)
  const BEGIN = new Date()

  /** Algorithm begins */
  const inner = (arr, result = []) => {
    let lowest = arr[0], lowestIndex = 0
    const L = arr.length

    for (let i = 0; i < L; i++) {
      if (arr[i] < lowest) {
        lowest = arr[i]
        lowestIndex = i
      }
    }

    result.push(lowest)

    // Remove the lowest value from the remaining array
    const first = arr.slice(0, lowestIndex)
    const second = arr.slice(lowestIndex + 1, L)
    arr = first.concat(second)

    // Recurse until the source array is exhausted
    if (arr.length > 0) return inner(arr, result)
    else return result
  }
  let result = inner(arr)
  /** Algorithm ends */

  const END = new Date()
  console.log(`Ending array: ${result} - Length: ${result.length} - Time: ${END - BEGIN}`)
}

Counting Sort - sort by statistical frequency using a buffer. (Frequency Sort)

// Sort by frequency
const counting = arr => {
  let freq = [], result = [], max = arr[0]

  for (let i = 0; i < arr.length; i++) {
    if (max < arr[i]) max = arr[i]
  }

  for (let i = 0; i < max + 1; i++) {
    freq[i] = 0
  }

  console.log(`Starting array: ${arr} - Length: ${arr.length}`)
  const BEGIN = new Date()

  /** Algorithm begins */
  for (let i = 0; i < arr.length; i++) {
    freq[arr[i]] = freq[arr[i]] + 1
  }

  console.log(`Frequency array: ${freq}`)
  for (let i = 0; i < freq.length; i++) {
    for (let j = 0; j < freq[i]; j++) {
      result.push(i)
    }
  }
  /** Algorithm ends */

  const END = new Date()
  console.log(`Ending array: ${result} - Length: ${result.length} - Time: ${END - BEGIN}`)
}

Bucket Sort - sort an array using buckets (that divide across the range of the input array); can behave like a Counting Sort and/or an Insertion Sort depending on setup.
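A minimal sketch (the bucket count, helper names, and use of the native sort within each bucket are my own choices):

```javascript
// Bucket Sort: distribute values into buckets spanning the input's
// range, sort each small bucket, then concatenate the buckets in order
const bucketSort = (arr, bucketCount = 4) => {
  if (arr.length === 0) return arr
  const min = Math.min(...arr), max = Math.max(...arr)
  const span = (max - min + 1) / bucketCount
  const buckets = Array.from({ length: bucketCount }, () => [])

  // Each value lands in the bucket covering its slice of the range
  for (const v of arr) {
    const idx = Math.min(bucketCount - 1, Math.floor((v - min) / span))
    buckets[idx].push(v)
  }

  // Sort each (hopefully small) bucket independently and recombine
  let result = []
  for (const b of buckets) result = result.concat(b.sort((x, y) => x - y))
  return result
}

console.log(bucketSort([29, 3, 11, 7, 23, 5])) // [3, 5, 7, 11, 23, 29]
```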

Comparison Sorts

Heap Sort - build a Binary Heap from the array, then repeatedly extract the root (the max or min) to produce the sorted output.

Review: https://github.com/Thoughtscript/cplusplus-coursera/blob/master/course/12%20-%20Heap/main.cpp
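A minimal JavaScript sketch of the same idea (helper names are mine): build a max-heap in place, then repeatedly swap the root to the end and re-heapify the shrinking prefix:

```javascript
// Heap Sort: in-place max-heap followed by repeated root extraction
const heapSort = arr => {
  // Sift a value down until the max-heap property holds
  const sift = (i, size) => {
    while (true) {
      let largest = i
      const l = 2 * i + 1, r = 2 * i + 2
      if (l < size && arr[l] > arr[largest]) largest = l
      if (r < size && arr[r] > arr[largest]) largest = r
      if (largest === i) return
      ;[arr[i], arr[largest]] = [arr[largest], arr[i]]
      i = largest
    }
  }

  // Build the max-heap bottom-up
  for (let i = Math.floor(arr.length / 2) - 1; i >= 0; i--) sift(i, arr.length)

  // Move the max to the end, then re-heapify the remaining prefix
  for (let end = arr.length - 1; end > 0; end--) {
    ;[arr[0], arr[end]] = [arr[end], arr[0]]
    sift(0, end)
  }
  return arr
}

console.log(heapSort([5, 1, 4, 2, 3])) // [1, 2, 3, 4, 5]
```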

Merge Sort - divide an array into halves until two elements exist in each array then recombine comparing elements along the way.

// Divide an array into halves until two elements exist in each array then recombine comparing elements along the way
const mergesort = arr => {
  const BEGIN = new Date()
  console.log(`Starting array: ${arr} - Length: ${arr.length}`)

  /** Algorithm begin */
  const merge = (a, b) => {
    let i = 0, j = 0, results = []

    while (i < a.length && j < b.length) {
      if (a[i] < b[j]) {
        results.push(a[i])
        i++

      } else {
        results.push(b[j])
        j++
      }
    }

    while (i < a.length) {
      results.push(a[i])
      i++
    }

    while (j < b.length) {
      results.push(b[j])
      j++
    }

    return results
  }

  const innerMerge = arr => {
    if (arr.length <= 1) return arr
    const MID = Math.floor(arr.length / 2)
    const LEFT = innerMerge(arr.slice(0, MID))
    const RIGHT = innerMerge(arr.slice(MID))
    return merge(LEFT, RIGHT)
  }

  const result = innerMerge(arr)
  /** Algorithm ends */

  const END = new Date()
  console.log(`Ending array: ${result} - Length: ${result.length} - Time: ${END - BEGIN}`)
}

Review: https://medium.com/analytics-vidhya/implement-merge-sort-algorithm-in-javascript-7402b7271887

Quick Sort - specify a pivot point and swap values that are above and below the pivot point.

// Specify a pivot point and swap
const quicksort = arr => {
  let result = []
  console.log(`Starting array: ${arr} - Length: ${arr.length}`)
  const BEGIN = new Date()

  /** Algorithm begin */
  const inner = arr => {
    if (arr.length <= 1) return arr

    // Use the last element as the pivot point
    const PIVOT = arr[arr.length - 1]
    let lArr = [], rArr = []

    // Partition: values above the pivot go right, the rest go left
    for (let i = 0; i < arr.length - 1; i++) {
      if (arr[i] > PIVOT) rArr.push(arr[i])
      else lArr.push(arr[i])
    }

    // Recursively sort each partition and recombine around the pivot
    return inner(lArr).concat([PIVOT], inner(rArr))
  }
  result = inner(arr)
  /** Algorithm ends */

  const END = new Date()
  console.log(`Ending array: ${result} - Length: ${result.length} - Time: ${END - BEGIN}`)
}

Bubble Sort - loop through replacing sequential pairs (Swap) as necessary.

// Loop through replacing sequential pairs as needed
const bubble = arr => {
  console.log(`Starting array: ${arr} - Length: ${arr.length}`)
  const BEGIN = new Date()

  /** Algorithm begins */
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        const original = arr[j]
        arr[j] = arr[j + 1]
        arr[j + 1] = original
      }
    }
  }
  /** Algorithm ends */

  const END = new Date()
  console.log(`Ending array: ${arr} - Length: ${arr.length} - Time: ${END - BEGIN}`)
}

Selection Sort - loop through swapping the lowest number with the current index.

// Loop through swapping the lowest number
const selection = arr => {
  console.log(`Starting array: ${arr} - Length: ${arr.length}`)
  const BEGIN = new Date()

  /** Algorithm begins */
  const inner = (arr, lastIndex = 0) => {
    let lowest = arr[lastIndex], lowestIndex = lastIndex, flag = false
    for (let i = lastIndex; i < arr.length; i++) {
      if (arr[i] <= lowest) {
        lowest = arr[i]
        lowestIndex = i
        flag = true
      }
    }

    if (flag) {
      const original = arr[lastIndex]
      arr[lastIndex] = lowest
      arr[lowestIndex] = original
      lastIndex++
      return inner(arr, lastIndex)
    } else return arr
  }

  let result = inner(arr)
  /** Algorithm ends */

  const END = new Date()
  console.log(`Ending array: ${result} - Length: ${result.length} - Time: ${END - BEGIN}`)
}

Language Implementations

  1. Java uses a modified Merge Sort (by default).
  2. JavaScript (V8) now uses TimSort (a hybrid Merge/Insertion Sort) by default; it previously used a Quick Sort with an Insertion Sort fallback for short arrays.
  1. https://v8.dev/blog/array-sort
  2. https://tc39.es/ecma262/multipage/indexed-collections.html#sec-array.prototype.sort
  3. https://docs.oracle.com/javase/6/docs/api/java/util/Collections.html#sort%28java.util.List%29
  4. https://medium.com/analytics-vidhya/implement-merge-sort-algorithm-in-javascript-7402b7271887

Algorithms: Recursion

Where a function repeatedly calls itself until some condition is met.
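E.g. - the classic factorial, which calls itself until the base case is met:

```javascript
// Recursion: each call shrinks n until the base case n <= 1 stops it
const factorial = n => n <= 1 ? 1 : n * factorial(n - 1)

console.log(factorial(5)) // 120
```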

Solved Examples

  1. https://leetcode.com/problems/permutations/
  2. https://leetcode.com/problems/fibonacci-number/
  3. https://leetcode.com/problems/keys-and-rooms/
  4. https://leetcode.com/problems/deepest-leaves-sum/
  5. https://leetcode.com/problems/minimum-add-to-make-parentheses-valid/

Algorithms: Loop Recursion

A kind of Recursion problem where Recursion is called within a For-loop.

Useful for iterating through all permissible contiguous String subsequence "chunks".

  1. https://www.youtube.com/watch?v=DI6pH9Cx654 <- great example
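A minimal sketch of the technique (helper names are mine): the For-loop peels off every possible prefix "chunk" of the String, and recursion handles the remainder:

```javascript
// Loop Recursion: recursion invoked inside a for-loop to enumerate
// every way of splitting a string into contiguous chunks
const partitions = (str, current = [], results = []) => {
  if (str.length === 0) {
    results.push([...current])
    return results
  }
  for (let i = 1; i <= str.length; i++) {
    current.push(str.slice(0, i))          // choose a prefix chunk
    partitions(str.slice(i), current, results)
    current.pop()                          // backtrack
  }
  return results
}

console.log(partitions("abc"))
// [["a","b","c"], ["a","bc"], ["ab","c"], ["abc"]]
```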

Algorithms: Dynamic Programming

Breaking a problem down into multiple sub-problems where each step is stored (typically on the fly not Ahead-of-Time).

Often used with Memoization to reduce recursion footprint.
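E.g. - a minimal bottom-up Fibonacci sketch, where each sub-problem's result is stored on the fly:

```javascript
// Bottom-up DP: each step reuses the stored results of the previous two
const fib = n => {
  const table = [0, 1]
  for (let i = 2; i <= n; i++) table[i] = table[i - 1] + table[i - 2]
  return table[n]
}

console.log(fib(10)) // 55
```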

Solved Examples

  1. https://leetcode.com/problems/count-sorted-vowel-strings/ <- Using variables that are saved off each cycle
  2. https://leetcode.com/problems/product-of-array-except-self/ <- Bottom 50% time-complexity. Top .07% space-complexity.
  3. https://leetcode.com/problems/max-increase-to-keep-city-skyline/
  4. https://leetcode.com/problems/arithmetic-subarrays/
  5. https://leetcode.com/problems/pascals-triangle/

Algorithms: Memoization

Saving the state of an instruction set into an Array, Map, Stack, Deque, or Queue.

Often eliminates the need to recursively compute some value.
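E.g. - a minimal memoized Fibonacci sketch using a Map as the cache:

```javascript
// Memoization: cache each computed value so recursive branches
// never recompute it - O(n) instead of O(2^n) calls
const memoFib = (n, cache = new Map()) => {
  if (n <= 1) return n
  if (cache.has(n)) return cache.get(n)
  const result = memoFib(n - 1, cache) + memoFib(n - 2, cache)
  cache.set(n, result)
  return result
}

console.log(memoFib(40)) // 102334155
```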

Solved Examples

  1. https://leetcode.com/problems/triangle/
  2. https://leetcode.com/problems/climbing-stairs/ <- Used a precomputed O(1) data set I generated.
  3. https://leetcode.com/problems/count-and-say/ <- Used a precomputed O(1) data set I generated.
  4. https://leetcode.com/problems/longest-nice-substring/

Algorithms: Sets

Power Set

Dynamic Solution:

/**
 * Review: https://stackoverflow.com/questions/24365954/how-to-generate-a-power-set-of-a-given-set
 *
 * Strategy is clean and elegant so I made a few tweaks and implemented it in JS from the above.
 * 
 * Add elements one at a time to a result array, also iterate over the result array adding elements to the existing ones.
 *
 * This accomplished what was done here: https://medium.com/leetcode-patterns/leetcode-pattern-3-backtracking-5d9e5a03dc26 
 * and here: https://jimmy-shen.medium.com/78-subsets-fa4f0b047664 without backtracking (those are excellent articles 
 * and implementations given the requirements!).
 */

const powerSet = arrSet => {
  let arrSets = []
  for (let i = 0; i < arrSet.length; i++) {
    let item = arrSet[i], origSets = [...arrSets]

    for (let j = 0; j < origSets.length; j++) {
      let nSet = origSets[j].concat(item)
      arrSets.push(nSet)
    }
    arrSets.push([item])
  }
  arrSets.push([])
  return arrSets.sort()
}

window.onload = () => {
  console.log(powerSet([4, 2, 3]))
  console.log(powerSet([5, 1, 4, 2, 3]))
}
[
  [],          [ 2 ],
  [ 2, 3 ],    [ 3 ],
  [ 4 ],       [ 4, 2 ],
  [ 4, 2, 3 ], [ 4, 3 ]
]

[
  [],             [ 1 ],
  [ 1, 2 ],       [ 1, 2, 3 ],
  [ 1, 3 ],       [ 1, 4 ],
  [ 1, 4, 2 ],    [ 1, 4, 2, 3 ],
  [ 1, 4, 3 ],    [ 2 ],
  [ 2, 3 ],       [ 3 ],
  [ 4 ],          [ 4, 2 ],
  [ 4, 2, 3 ],    [ 4, 3 ],
  [ 5 ],          [ 5, 1 ],
  [ 5, 1, 2 ],    [ 5, 1, 2, 3 ],
  [ 5, 1, 3 ],    [ 5, 1, 4 ],
  [ 5, 1, 4, 2 ], [ 5, 1, 4, 2, 3 ],
  [ 5, 1, 4, 3 ], [ 5, 2 ],
  [ 5, 2, 3 ],    [ 5, 3 ],
  [ 5, 4 ],       [ 5, 4, 2 ],
  [ 5, 4, 2, 3 ], [ 5, 4, 3 ]
]

Greedy Solution:

/**
 * One can approach this from a few different angles:
 * 1. Brute force.
 * 2. Binary counter.
 * 3. Use native abstractions.
 * ---
 * I'd prefer to use what I understand to be a novel approach by generating 
 * constraints AOT (ahead-of-time) so that power-set 
 * generation is linear for every subsequent run.
 * ---
 * We can then use these values to populate others by index (e.g. - 1 means in the subset, 0 not).
 * The upside is that this is superfast (most powerset implementations 
 * involve deep recursion and 2-3 nested loops).
 * ----
 * The downside of this approach is that it only works if you know the desired 
 * set size beforehand. It's Greedy.
 */

const generateConstraints = () => {
  let result = [], current = []

  // Flattened nested loop...
  for (let i = 0, j = 0, k = 0, l = 0; i < 2 && j < 2 && k < 2 && l < 2; i) {
    current.push(i)
    current.push(j)
    current.push(k)
    current.push(l)
    result.push(current)
    current = []

    l++;
    if (l == 2) {
      k++
      l = 0
    }
    if (k == 2) {
      j++
      k = 0
    }
    if (j == 2) {
      i++
      j = 0
    }
  }

  return result
}

window.onload = () => {
  const aot = generateConstraints()
  console.log(aot)
}
[
  [ 0, 0, 0, 0 ], [ 0, 0, 0, 1 ],
  [ 0, 0, 1, 0 ], [ 0, 0, 1, 1 ],
  [ 0, 1, 0, 0 ], [ 0, 1, 0, 1 ],
  [ 0, 1, 1, 0 ], [ 0, 1, 1, 1 ],
  [ 1, 0, 0, 0 ], [ 1, 0, 0, 1 ],
  [ 1, 0, 1, 0 ], [ 1, 0, 1, 1 ],
  [ 1, 1, 0, 0 ], [ 1, 1, 0, 1 ],
  [ 1, 1, 1, 0 ], [ 1, 1, 1, 1 ]
]

Cartesian Multiplication

Original Generalized Dynamic Solution using the above Dynamic Power Set Solution as an inspiration:

const solve = (args) => {
    let last = []
    const L = args.length - 1

    for (let i = 0; i < args[L].length; i++) {
        last.push([args[L][i]])
    }

    for (let i = L - 1; i >= 0; i--) {
        let curr = args[i], temp = []

        for (let j = 0; j < curr.length; j++) {
            for (let k = 0; k < last.length; k++) {
                const v = [...last[k]]
                if (v.indexOf(curr[j]) === -1) v.push(curr[j])
                temp.push(v)
            }
        }

        last = temp
    }

    console.log(last)
} 

window.onload = () => {
  const argsA = [[1,2],[3,4],[5,6]]
  solve(argsA)

  const argsB = [[1,2,3],[3,4],[5,6,15]]
  solve(argsB)

  const argsC = [[1],[1],[1]]
  solve(argsC)
}

Note: the above implementation isn't order-preserving (Sets don't preserve order anyway) and assumes no empty Array is passed (the Empty Set is a subset of every Set, so we omit all empty Arrays rather than having to add the Empty Set everywhere).

[
  [5, 3, 1], 
  [6, 3, 1], 
  [5, 4, 1], 
  [6, 4, 1], 
  [5, 3, 2], 
  [6, 3, 2], 
  [5, 4, 2], 
  [6, 4, 2]
]

[
  [5, 3, 1], 
  [6, 3, 1],
  [15, 3, 1], 
  [5, 4, 1], 
  [6, 4, 1], 
  [15, 4, 1], 
  [5, 3, 2], 
  [6, 3, 2], 
  [15, 3, 2], 
  [5, 4, 2], 
  [6, 4, 2], 
  [15, 4, 2],
  [5, 3], 
  [6, 3],
  [15, 3], 
  [5, 4, 3], 
  [6, 4, 3], 
  [15, 4, 3]
]

[
  [1]
]

Algorithms: Hash Maps and Counts

Typically used when a solution requires a unique answer or a deduplicated Set.

Also often used to track the frequency of some value (number of occurrences).

Technique: Hash Map

let M = {}; 

// ...

if (M[x] != undefined) M[x]++; 
else M[x] = 1;
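A complete, runnable sketch of the pattern above (the helper name is mine):

```javascript
// Frequency count: tally how often each character occurs
const countOccurrences = str => {
  const M = {}
  for (const x of str) {
    if (M[x] != undefined) M[x]++
    else M[x] = 1
  }
  return M
}

console.log(countOccurrences("aabbbc")) // { a: 2, b: 3, c: 1 }
```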

Solved Examples

  1. https://leetcode.com/problems/isomorphic-strings/
  2. https://leetcode.com/problems/word-pattern/description/
  3. https://leetcode.com/problems/divide-array-into-increasing-sequences/
  4. https://leetcode.com/problems/longest-consecutive-sequence/
  5. https://leetcode.com/problems/minimum-number-of-steps-to-make-two-strings-anagram/
  6. https://leetcode.com/problems/check-if-all-characters-have-equal-number-of-occurrences/ <- Bottom 50% time-complexity.

Algorithms: Graph Relationship

Where a series of relationships are expressed using Arrays, Sets, or Maps.

Often a kind of Hash Counting Algorithm that requires Reverse Maps, Dual Mapping, Symmetric Maps, and the like. Strong possibility that a solution can be achieved using a Map of Maps.

Reverse Map

const A = {
    "AA": "BB",
    "CC":"DD"
}

const B = {
    "BB": "AA",
    "DD":"CC"
}

N-Tuple

Demonstrating a specific kind of this problem and how to index the relationships in multiple ways in O(N) time:

const Example = [
    [1, 2, 3], 
    [0, 3, 1], 
    [0, 4, 3], 
    [0, 5, 3], 
]

/*
 Where for each X in Example:
    1. X[1] is related to X[2] through relationship X[0].
    2. Where the relationship of X[1] to X[2] is symmetric (or not depending).
*/

const A = {
    "1": {
        "3": 0
    },
    "2": {
        "3": 1
    },
    "3": {
        "2": 1,
        "1": 0,
        "4": 0,
        "5": 0
    },
    "4": {
        "3": 0
    },
    "5": {
        "3": 0
    }
}
// Map of X[1] to X[2] and X[2] to X[1]

const B = {
    "1": {
        "3": {
            "2": true
        },
        "2": {
            "3": true
        }
    },
    "0": {
        "1": {
            "3": true
        },
        "3": {
            "1": true,
            "4": true,
            "5": true
        },
        "4": {
            "3": true
        },
        "5": {
            "3": true
        }
    }
}

// Map of X[0] to the relationship of X[2] to X[1] (and vice-versa)

Algorithms: Phone Dialer

Questions that use a 0-9 phone pad or phone numbers.

Solved Examples

  1. https://leetcode.com/problems/minimum-number-of-keypresses/
  2. https://www.codewars.com/kata/635b8fa500fba2bef9189473
  3. https://leetcode.com/problems/reformat-phone-number/
  4. https://leetcode.com/problems/letter-combinations-of-a-phone-number/
  5. https://leetcode.com/problems/knight-dialer/ <- Bit slow.

Algorithms: Text Representation

Technique: Unicode Char Codes

// JavaScript
const c = str.charCodeAt(i);

// Unicode number codes 48-57 inclusive
if (c >= 48 && c <= 57) console.log("I'm a number character")
// Unicode uppercase letter codes 65-90 inclusive 
if (c >= 65 && c <= 90) console.log("I'm an uppercase character")
// Unicode lowercase letter codes 97-122 inclusive 
if (c >= 97 && c <= 122) console.log("I'm a lowercase character")
// JavaScript
const c = str.charCodeAt(i);

if ((c >= 65 && c <= 90) || (c >= 97 && c <= 122)) {
    const isUpperVowel = [65, 69, 73, 79, 85].indexOf(c) !== -1 
    const isLowerVowel = [97, 101, 105, 111, 117].indexOf(c) !== -1
    if (isUpperVowel || isLowerVowel) console.log("I'm a vowel")
    else console.log("I'm a consonant")
} else console.log("I'm not a letter")

Review: https://tutorial.eyehunts.com/js/get-the-unicode-value-of-character-javascript-example-code/

Technique: Character Maps

Alphabet maps:

// JavaScript
const a = {"a":0,"e":0,"i":0,"o":0,"u":0,"b":0,"c":0,"d":0,"f":0,"g":0,"h":0,"j":0,"k":0,"l":0,"m":0,"n":0,"p":0,"q":0,"r":0,"s":0,"t":0,"v":0,"w":0,"x":0,"y":0,"z":0}
const A = {"A":0,"E":0,"I":0,"O":0,"U":0,"B":0,"C":0,"D":0,"F":0,"G":0,"H":0,"J":0,"K":0,"L":0,"M":0,"N":0,"P":0,"Q":0,"R":0,"S":0,"T":0,"V":0,"W":0,"X":0,"Y":0,"Z":0}

// As array mapped by index
const arr_a = Object.keys(a)
const arr_A = Object.keys(A)

Consonant maps:

// JavaScript
const c = {"b":0,"c":0,"d":0,"f":0,"g":0,"h":0,"j":0,"k":0,"l":0,"m":0,"n":0,"p":0,"q":0,"r":0,"s":0,"t":0,"v":0,"w":0,"x":0,"y":0,"z":0}
const C = {"B":0,"C":0,"D":0,"F":0,"G":0,"H":0,"J":0,"K":0,"L":0,"M":0,"N":0,"P":0,"Q":0,"R":0,"S":0,"T":0,"V":0,"W":0,"X":0,"Y":0,"Z":0}

// As array mapped by index
const arr_c = Object.keys(c)
const arr_C = Object.keys(C)

Vowel maps:

// JavaScript
const v = {"a":0,"e":0,"i":0,"o":0,"u":0}
const V = {"A":0,"E":0,"I":0,"O":0,"U":0}

// As array mapped by index
const arr_v = Object.keys(v)
const arr_V = Object.keys(V)

Number maps:

// JavaScript
const N = {"0":0,"1":0,"2":0,"3":0,"4":0,"5":0,"6":0,"7":0,"8":0,"9":0}

// As array mapped by index
const arr_N = Object.keys(N)

Technique: Check Character is Number

// Java
Character.isDigit(inputStr.charAt(i)); // true/false
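
An equivalent check in JavaScript (a small sketch; the helper names are hypothetical, not from any standard library):

```javascript
// String comparison works because '0'..'9' are contiguous
const isDigitChar = ch => ch >= '0' && ch <= '9'

// Or via Unicode char codes (48-57 inclusive)
const isDigitCode = ch => {
    const c = ch.charCodeAt(0)
    return c >= 48 && c <= 57
}
```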

Algorithms: Letter Chars and Word Dictionaries

Specific to dictionary, String, and Character checking with a fixed, finite alphabet.

Solved Examples

  1. https://leetcode.com/problems/expressive-words/
  2. https://leetcode.com/problems/string-without-aaa-or-bbb/ <- Bottom 50% time-complexity.
  3. https://leetcode.com/problems/maximum-number-of-words-you-can-type/
  4. https://leetcode.com/problems/check-if-all-characters-have-equal-number-of-occurrences/ <- Bottom 50% time-complexity.
  5. https://leetcode.com/problems/check-if-word-equals-summation-of-two-words/
  6. https://leetcode.com/problems/long-pressed-name/
  7. https://leetcode.com/problems/shortest-word-distance/
  8. https://leetcode.com/problems/keyboard-row/
  9. https://leetcode.com/problems/verifying-an-alien-dictionary/ <- Bottom 50% time-complexity.
  10. https://leetcode.com/problems/single-row-keyboard/
  11. https://leetcode.com/problems/remove-vowels-from-a-string/ <- Bottom 50% time-complexity.

Algorithms: Word Patterns, Anagrams, and Palindromes

Word pattern, anagram, and palindrome scenarios.
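
A common building block for these is a character-count comparison. A sketch of an anagram check (assuming lowercase input):

```javascript
// Two strings are anagrams iff their character counts match - O(N)
const isAnagram = (s, t) => {
    if (s.length !== t.length) return false
    const counts = {}
    for (const ch of s) counts[ch] = (counts[ch] || 0) + 1
    for (const ch of t) {
        if (!counts[ch]) return false // missing or exhausted character
        counts[ch]--
    }
    return true
}
```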

Solved Examples

  1. https://leetcode.com/problems/word-pattern/
  2. https://leetcode.com/problems/palindrome-number/
  3. https://leetcode.com/problems/valid-palindrome/ <- Below 50% time-complexity. Top 2.62% space-complexity.
  4. https://leetcode.com/problems/palindrome-linked-list/
  5. https://leetcode.com/problems/valid-palindrome-ii/
  6. https://leetcode.com/problems/palindromic-substrings/ <- Below 50% time-complexity. Brute-force.
  7. https://leetcode.com/problems/partition-labels/ <- Below 50% time-complexity.
  8. https://leetcode.com/problems/find-anagram-mappings/
  9. https://leetcode.com/problems/group-anagrams/ <- Below 50% time-complexity. Top .26% space-complexity.
  10. https://leetcode.com/problems/minimum-number-of-steps-to-make-two-strings-anagram/

Algorithms: Parentheses

Calculate the validity of various Strings or substrings containing parentheticals.

Similar to Direction and Facing problems but typically with a narrower focus (balance).

Solved Examples

  1. https://leetcode.com/problems/remove-outermost-parentheses/
  2. https://leetcode.com/problems/valid-parentheses/
  3. https://leetcode.com/problems/longest-valid-parentheses/ <- Very slow.
  4. https://leetcode.com/problems/minimum-add-to-make-parentheses-valid/
  5. https://leetcode.com/problems/maximum-nesting-depth-of-the-parentheses
  6. https://www.codewars.com/kata/5426d7a2c2c7784365000783
  7. https://www.codewars.com/kata/5277c8a221e209d3f6000b56

Algorithms: Two Pointer

Typically used when working with Arrays or Strings.

Use two pointers left and right.

Check or compare the values at each pointer every cycle.

Technique: Left and Right Pointers

while (left < right) { 
    //... 
    left++; 
    right--;
}
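
A concrete instance of the skeleton above: a sketch of the sorted two-sum problem (0-indexed here). Advance left when the sum is too small, retreat right when it's too large.

```javascript
// Indices of two values in a sorted array summing to target - O(N)
const twoSumSorted = (nums, target) => {
    let left = 0, right = nums.length - 1
    while (left < right) {
        const sum = nums[left] + nums[right]
        if (sum === target) return [left, right]
        if (sum < target) left++  // need a larger value
        else right--              // need a smaller value
    }
    return []
}
```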

Solved Examples

  1. https://leetcode.com/problems/guess-number-higher-or-lower/
  2. https://leetcode.com/problems/missing-number-in-arithmetic-progression/
  3. https://leetcode.com/problems/minimize-maximum-pair-sum-in-array/ <- Very slow. Brute-force.
  4. https://leetcode.com/problems/split-two-strings-to-make-palindrome/
  5. https://leetcode.com/problems/merge-two-sorted-lists/

Algorithms: Sliding Window

Often used to find unique subarrays, longest substrings, longest Strings with some property, etc.

This method is often used to reduce the time complexity of a solution from Quadratic to Linear time (when it can be applied).

Generally: maintain a window defined by two indices, expand the right edge each step, and contract the left edge whenever the window violates the target property.
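
A minimal fixed-size window sketch (loosely modeled on the maximum-average-subarray example below; the function name is illustrative). Maintain a running sum, adding the element entering the window and subtracting the element leaving it:

```javascript
// Maximum average of any contiguous subarray of length k - O(N)
const maxAverage = (nums, k) => {
    let sum = 0
    for (let i = 0; i < k; i++) sum += nums[i]

    let best = sum
    for (let i = k; i < nums.length; i++) {
        sum += nums[i] - nums[i - k]  // slide the window right by one
        best = Math.max(best, sum)
    }
    return best / k
}
```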

Solved Examples

  1. https://leetcode.com/problems/maximum-average-subarray-i/
  2. https://leetcode.com/problems/longest-subarray-of-1s-after-deleting-one-element/
  3. https://leetcode.com/problems/distinct-numbers-in-each-subarray/ <- Very slow.
  4. https://leetcode.com/problems/longest-common-subsequence-between-sorted-arrays/

Algorithms: Direction and Facing

Solved Examples

  1. https://leetcode.com/problems/robot-bounded-in-circle/ <- Very slow.
  2. https://leetcode.com/problems/robot-return-to-origin/
  3. https://github.com/Thoughtscript/bloomfire_test
  4. https://www.codewars.com/kata/550f22f4d758534c1100025a

Algorithms: Number Representation

Non-number type number-representation scenarios.

Technique: Fast Number Reversal

Old way:

const rev = num => {
    const numStr = `${num}`
    let str = ''

    for (let i = numStr.length - 1; i >= 0; i--) {
        str += numStr[i]
    }

    return parseInt(str)
}

Much faster than int to String and back conversion:

// Java
int reverse(int num) {
    int rev = 0;

    while(num > 0){
        int d = num % 10;
        rev = rev * 10 + d;
        num = num / 10;
    }

    return rev;
}
// JavaScript
const reverse = num => {
    let rev = 0

    while (Math.floor(num) > 0) {
        const d = Math.floor(num % 10)
        rev = rev * 10 + d
        num = num / 10
    }

    return rev
}

Unlike Java's integer division, JavaScript division produces fractional results rather than truncating toward zero - so use Math.floor().

Solved Examples

  1. https://projecteuler.net/problem=89
  2. https://www.codewars.com/kata/51b66044bce5799a7f000003
  3. https://www.codewars.com/kata/5324945e2ece5e1f32000370
  4. https://www.codewars.com/kata/525f4206b73515bffb000b21
  5. https://www.codewars.com/kata/5265326f5fda8eb1160004c8
  6. https://www.codewars.com/kata/54d7660d2daf68c619000d95
  7. https://leetcode.com/problems/thousand-separator/
  8. https://leetcode.com/problems/roman-to-integer/ <- Below 50% time-complexity. Top .01% space-complexity.
  9. https://leetcode.com/problems/integer-to-roman/
  10. https://leetcode.com/problems/integer-to-english-words/
  11. https://leetcode.com/problems/string-to-integer-atoi/
  12. https://leetcode.com/problems/add-strings/
  13. https://leetcode.com/problems/add-two-numbers/
  14. https://leetcode.com/problems/add-two-numbers-ii/ <- Below 50% time-complexity. Top 2.73% space-complexity.
  15. https://leetcode.com/problems/sum-of-two-integers/
  16. https://leetcode.com/problems/excel-sheet-column-number/ <- Not original, bijective base 26.

Algorithms: Number Theory

Implementations of arithmetic or number-theoretic scenarios.

Solved Examples

  1. https://leetcode.com/problems/happy-number/
  2. https://leetcode.com/problems/self-dividing-numbers/
  3. https://leetcode.com/problems/valid-perfect-square/
  4. https://projecteuler.net/problem=46
  5. https://leetcode.com/problems/power-of-two/
  6. https://leetcode.com/problems/sum-of-two-integers/
  7. https://leetcode.com/problems/fraction-addition-and-subtraction/
  8. https://leetcode.com/problems/fibonacci-number/
  9. https://projecteuler.net/problem=14
  10. https://leetcode.com/problems/add-strings/
  11. https://leetcode.com/problems/prime-arrangements/
  12. https://leetcode.com/problems/missing-number-in-arithmetic-progression/
  13. https://projecteuler.net/problem=32

Algorithms: Prime Numbers

Algorithms involving Prime Numbers.
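
Many of these reduce to efficiently generating all primes up to N. A Sieve of Eratosthenes sketch (relevant to count-primes, where per-number trial division is slow):

```javascript
// Sieve of Eratosthenes - all primes below n in O(N log log N)
const sieve = n => {
    const isPrime = new Array(n).fill(true)
    isPrime[0] = isPrime[1] = false
    for (let i = 2; i * i < n; i++) {
        if (!isPrime[i]) continue
        // Mark every multiple of i starting at i*i (smaller multiples
        // were already marked by smaller primes)
        for (let j = i * i; j < n; j += i) isPrime[j] = false
    }
    const primes = []
    for (let i = 2; i < n; i++) if (isPrime[i]) primes.push(i)
    return primes
}
```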

Solved Examples

  1. https://projecteuler.net/problem=7
  2. https://www.codewars.com/kata/54d512e62a5e54c96200019e
  3. https://www.codewars.com/kata/5262119038c0985a5b00029f
  4. https://projecteuler.net/problem=35
  5. https://projecteuler.net/problem=41
  6. https://projecteuler.net/problem=37
  7. https://leetcode.com/problems/count-primes/ <- Very slow.

Algorithms: Path Compression and Merging

Used to combine multiple subarrays until they are all non-overlapping.

Used to merge Sets until they are Disjoint Sets.
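
A merge-intervals sketch (the pattern behind several of the examples below): sort by start, then either extend the last merged interval or emit a new one.

```javascript
// Merge overlapping intervals - O(N log N) from the sort
const mergeIntervals = intervals => {
    if (intervals.length === 0) return []
    const sorted = [...intervals].sort((a, b) => a[0] - b[0])
    const result = [sorted[0].slice()]
    for (let i = 1; i < sorted.length; i++) {
        const last = result[result.length - 1]
        // Overlaps the last merged interval - extend it
        if (sorted[i][0] <= last[1]) last[1] = Math.max(last[1], sorted[i][1])
        else result.push(sorted[i].slice())
    }
    return result
}
```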

Solved Examples

  1. https://leetcode.com/problems/sum-of-digits-of-string-after-convert/ <- top 98.6% answer, loosely a compression/merging problem since it involves repeatedly summing a number until it’s below 10 or k.
  2. https://leetcode.com/problems/merge-intervals/
  3. https://www.codewars.com/kata/5286d92ec6b5a9045c000087
  4. https://leetcode.com/problems/partition-labels/
  5. https://leetcode.com/problems/meeting-rooms/
  6. https://leetcode.com/problems/meeting-rooms-ii/ <- Very slow.
  7. https://www.codewars.com/kata/52b7ed099cdc285c300001cd
  8. https://leetcode.com/problems/summary-ranges/

Algorithms: Directory or Name Traversal

Directory name, IP Address, versioning, or URL context path operations.
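
These usually come down to splitting on a delimiter and processing the segments, often with a Stack. A sketch in the style of the simplify-path problem:

```javascript
// Collapse ".", "..", and empty segments in a Unix-style path - O(N)
const simplifyPath = path => {
    const stack = []
    for (const part of path.split('/')) {
        if (part === '' || part === '.') continue // no-op segments
        if (part === '..') stack.pop()            // go up one directory
        else stack.push(part)
    }
    return '/' + stack.join('/')
}
```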

Solved Examples

  1. https://leetcode.com/problems/simplify-path/
  2. https://leetcode.com/problems/validate-ip-address/
  3. https://leetcode.com/problems/defanging-an-ip-address
  4. https://www.codewars.com/kata/5286d92ec6b5a9045c000087
  5. https://leetcode.com/problems/compare-version-numbers/

Algorithms: Level Recursion

Recursion specific to iterating down a pyramidal data structure.

Technique: Queue

Use a Queue: maintain two alternating containers, level (the current level) and nxt (the next level).

    level = [node]

    while len(level) > 0:
        temp = []
        nxt = []

        for c in level:
            if c is None:
                continue
            temp.append(c.value)
            nxt.append(c.left)
            nxt.append(c.right)

        for x in temp:
            result.append(x)

        level = nxt

Solved Examples

  1. https://www.codewars.com/kata/52bef5e3588c56132c0003bc
  2. https://leetcode.com/problems/binary-tree-level-order-traversal/
  3. https://leetcode.com/problems/n-ary-tree-level-order-traversal/
  4. https://leetcode.com/problems/binary-tree-level-order-traversal-ii/

Algorithms: Geometry

Implementations of geometry scenarios and problems.
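
A small sketch of the axis-separation idea used in rectangle-overlap style problems (rectangles as [x1, y1, x2, y2]; strict inequalities, so touching edges don't count as overlap):

```javascript
// Two axis-aligned rectangles overlap iff they intersect on both axes
const overlap = (a, b) =>
    a[0] < b[2] && b[0] < a[2] &&  // x ranges intersect
    a[1] < b[3] && b[1] < a[3]     // y ranges intersect
```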

Solved Examples

  1. https://leetcode.com/problems/get-biggest-three-rhombus-sums-in-a-grid/ <- Bottom 50% time-complexity.
  2. https://leetcode.com/problems/rectangle-overlap/
  3. https://leetcode.com/problems/angle-between-hands-of-a-clock/
  4. https://leetcode.com/problems/k-closest-points-to-origin/
  5. https://leetcode.com/problems/subrectangle-queries/
  6. https://leetcode.com/problems/number-of-rectangles-that-can-form-the-largest-square/

Algorithms: Islands

Problems representing connected sub-matrices.

Technique: Check and Recurse

const countIslands = mapStr => {
  const ARR = mapStr.split("\n"), cleanedArr = []

  for (let i = 0; i < ARR.length; i++) {
    cleanedArr.push(ARR[i].split(""))
  }

  let deepCopy = cleanedArr.map(row => [...row]), cnt = 0, hasNext = findNext(deepCopy)

  while (hasNext !== false) {
    cnt++
    recurse(deepCopy, hasNext[0], hasNext[1])
    hasNext = findNext(deepCopy)
  }

  return cnt
}

const findNext = arr => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr[i].length; j++) {
      const I = arr[i][j]
      if (I === '0') {
        return [i, j]
      }
    }
  }  
  return false
}

const recurse = (arr, i, j) => {
  if (i <= -1 || j <= -1 || i >= arr.length || j >= arr[i].length) return
  arr[i][j] = '.'
  if (arr[i-1] !== undefined && arr[i-1][j] === '0') recurse(arr, i-1, j)    
  if (arr[i+1] !== undefined && arr[i+1][j] === '0') recurse(arr, i+1, j)
  if (arr[i][j-1] !== undefined && arr[i][j-1] === '0') recurse(arr, i, j-1)    
  if (arr[i][j+1] !== undefined && arr[i][j+1] === '0') recurse(arr, i, j+1)
}

Solved Examples

  1. https://www.codewars.com/kata/5611e038a1b7990def000076
  2. https://www.codewars.com/kata/55a4f1f67157d8cbe200007b
  3. https://leetcode.com/problems/island-perimeter/
  4. https://leetcode.com/problems/max-area-of-island/
  5. https://leetcode.com/problems/number-of-islands/
  6. https://leetcode.com/problems/flood-fill/
  7. https://leetcode.com/problems/minesweeper/

Algorithms: Pathing

To find a path from some origin to some end point.
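
A minimal dynamic-programming sketch for grid pathing (minimum-path-sum style; movement restricted to right and down):

```javascript
// Minimum path sum from top-left to bottom-right - O(N*M)
const minPathSum = grid => {
    const rows = grid.length, cols = grid[0].length
    const dp = grid.map(row => [...row])  // dp[i][j] = cheapest path to (i, j)
    for (let i = 0; i < rows; i++) {
        for (let j = 0; j < cols; j++) {
            if (i === 0 && j === 0) continue
            const up = i > 0 ? dp[i - 1][j] : Infinity
            const left = j > 0 ? dp[i][j - 1] : Infinity
            dp[i][j] += Math.min(up, left)
        }
    }
    return dp[rows - 1][cols - 1]
}
```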

Solved Examples

  1. https://leetcode.com/problems/minimum-path-sum/
  2. https://leetcode.com/problems/unique-paths/
  3. https://leetcode.com/problems/unique-paths-ii/
  4. https://projecteuler.net/problem=67
  5. https://projecteuler.net/problem=18

Algorithms: Rotations, Spirals, Diagonals

Includes spirals or diagonally traversing an N x M Array or matrix, transposing matrices, rotations, etc.

Also often involves mod or Modulus concepts.

Used for ciphers or rotating an Array.
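
A rotate-array sketch using the Modulus idea mentioned above (each element at index i lands at (i + k) mod N; returns a new array rather than rotating in place):

```javascript
// Rotate an array right by k steps - O(N)
const rotate = (nums, k) => {
    const n = nums.length, result = new Array(n)
    for (let i = 0; i < n; i++) result[(i + k) % n] = nums[i]
    return result
}
```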

Solved Examples

  1. https://leetcode.com/problems/rotate-string/
  2. https://leetcode.com/problems/search-in-rotated-sorted-array/
  3. https://leetcode.com/problems/find-minimum-in-rotated-sorted-array/
  4. https://leetcode.com/problems/rotate-array/
  5. https://codepen.io/thoughtscript/pen/poNQMaW
  6. https://leetcode.com/problems/spiral-matrix/
  7. https://leetcode.com/problems/matrix-diagonal-sum/
  8. https://leetcode.com/problems/sort-the-matrix-diagonally/
  9. https://leetcode.com/problems/diagonal-traverse/ <- Very slow.
  10. https://leetcode.com/problems/rotating-the-box/ <- Very slow.
  11. https://leetcode.com/problems/spiral-matrix-ii/
  12. https://www.codewars.com/kata/52fba2a9adcd10b34300094c

Algorithms: Coins and Make Bricks

Optimization scenarios involving fixed units of some value that need to be combined in some optimal way.
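
A coin-change counting sketch (bottom-up DP over denominations; iterating coins in the outer loop counts combinations rather than permutations):

```javascript
// Number of ways to make `amount` from `coins` - O(coins * amount)
const countWays = (coins, amount) => {
    const ways = new Array(amount + 1).fill(0)
    ways[0] = 1  // one way to make zero: use no coins
    for (const coin of coins)
        for (let a = coin; a <= amount; a++)
            ways[a] += ways[a - coin]
    return ways[amount]
}
```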

Solved Examples

  1. https://projecteuler.net/problem=31
  2. https://www.codewars.com/kata/564d0490e96393fc5c000029
  3. http://www.javaproblems.com/2013/11/java-logic-2-makebricks-codingbat.html
  4. https://leetcode.com/problems/coin-change-ii/ <- Not original.
  5. https://leetcode.com/problems/coin-change/ <- Not original. Slow.
  6. https://leetcode.com/problems/how-many-apples-can-you-put-into-the-basket/
  7. https://leetcode.com/problems/water-bottles/
  8. https://leetcode.com/problems/maximum-units-on-a-truck/ <- Below 50% time-complexity.

Algorithms: N-Sum

Triplet, 4-Sum, 3-Sum, and 2-Sum type problems.
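
A hash-map two-sum sketch (the usual O(N) approach; higher N-Sums typically sort first, then reduce to this plus pointers):

```javascript
// Indices of the two values summing to target - O(N) with a Map
const twoSum = (nums, target) => {
    const seen = new Map()  // value -> index
    for (let i = 0; i < nums.length; i++) {
        const need = target - nums[i]
        if (seen.has(need)) return [seen.get(need), i]
        seen.set(nums[i], i)
    }
    return []
}
```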

Solved Examples

  1. https://leetcode.com/problems/two-sum/ <- Original solution was very slow but top 100% space-complexity.
  2. https://leetcode.com/problems/two-sum-less-than-k/
  3. https://leetcode.com/problems/two-sum-ii-input-array-is-sorted/

Algorithms: Mountains, Peaks, and Stock Markets

Find the highest point in a sequence and, depending on the problem, the next highest or lowest point.
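
A single-pass sketch for the buy-low/sell-high variant: track the lowest point seen so far and the best profit against it.

```javascript
// Best single buy/sell profit - O(N)
const maxProfit = prices => {
    let minSoFar = Infinity, best = 0
    for (const p of prices) {
        minSoFar = Math.min(minSoFar, p)       // cheapest buy so far
        best = Math.max(best, p - minSoFar)    // sell today against it
    }
    return best
}
```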

Solved Examples

  1. https://leetcode.com/problems/longest-mountain-in-array/ <- Very slow.
  2. https://leetcode.com/problems/trapping-rain-water/
  3. https://leetcode.com/problems/peak-index-in-a-mountain-array/
  4. https://leetcode.com/problems/best-time-to-buy-and-sell-stock/ <- Not original.
  5. https://leetcode.com/problems/best-time-to-buy-and-sell-stock-ii/
  6. https://www.codewars.com/kata/597ef546ee48603f7a000057

Algorithms: Controlled Subsequences

Track things like substrings of a certain length, row or text justification/formatting, or repeated sub-patterns.

Use multiple pointers to track some value that gets reset every n-many loops.

Also includes "reverse loop in loop" (loop "backfilling") problems.

Solved Examples

  1. https://leetcode.com/problems/convert-1d-array-into-2d-array/
  2. https://leetcode.com/problems/string-without-aaa-or-bbb/
  3. https://leetcode.com/problems/text-justification/
  4. https://www.codewars.com/kata/537e18b6147aa838f600001b
  5. https://leetcode.com/problems/longest-subarray-of-1s-after-deleting-one-element/
  6. https://leetcode.com/problems/minimum-number-of-operations-to-move-all-balls-to-each-box/
  7. https://github.com/Thoughtscript/kin_insurance_js

Data Structures: Trees

A "tree-like" derived Data Structure that has zero or more child instances, each instance holding a value:

  1. Trees are a kind of Graph which has a Root node.
  2. Each parent node has one or more child nodes.

Attributes

Implementations

var TreeNode = function(val, children = []) {
    this.val = val;
    this.children = children;
}
  1. Depth First Search - refer to the Search article.
    • Top to Bottom, Left to Right
    • By Column
  2. Breadth First Search - consult the Level Recursion article.
    • Left to Right, Top to Bottom
    • By Row or Level
  3. N-Trees vs BST
    • For N-Trees replace Left and Right child nodes with Children List or Array.

Notes

  1. A LinkedList can be thought of as a Tree with only one child element at each node.

Code samples:

  1. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/course/11%20-%20Generic%20Tree

Data Structures: Binary Trees

A "tree-like" derived Data Structure that connects left and right child instances, each instance holding a value.

Useful resource: https://trees-visualizer.netlify.app/trees

Binary Trees

  1. A Tree with at most two children.
  2. Ordered by constraints.
    • Binary Heap
      • A Binary Tree
      • Max: the value of any parent node is greater than or equal to any of its children.
      • Min: the value of any parent node is less than or equal to any of its children.
      • Average Read: O(n)
      • Average Add: O(1)
    • Binary Search
      • A Binary Tree
      • The value of the left child node of a parent node must be less than or equal to the value of the parent node.
        • L <= P
      • The value of the right child node of a parent node must be greater than or equal to the value of the parent node.
        • R >= P
      • Therefore, any value of any right node of the root node will be greater than the value of any left node.
      • Therefore, any value of any left node of the root node will be less than the value of any right node.
      • Average Read: O(log(n))
      • Average Add: O(log(n))
    • AVL Tree (Georgy Adelson-Velsky and Evgenii Landis)
      • A Self Balancing Binary Search Tree (dynamic)
      • Heights of each branch differ by at most 1.
      • A self-balancing algorithm executes if heights differ by more than 1.
  3. Perfect
    • Balanced and all leaves are at the same depth.
    • Each branch is complete, has two children, and is at the same height as the rest.
  4. Balanced
    • Each branch differs in height no more than one compared to any other in the tree.
  5. Complete
    • Each level except the last is completely filled (every node in it has two children).
    • Each node in the last row is as far left as possible.

Attributes

Implementations

var TreeNode = function(val, left, right) {
    this.val = val;
    this.left = left;
    this.right = right;
}
  1. Pre Order - current node first, left node, then right node last
  2. In Order - left node first, then current node, then right node last
  3. Post Order - left node first, right node second, then current node last
const preOrder = node => 
{
  const traverse = (node, result) => {
    if (!node) result.push(null)
    else {
      result.push(node.data)
      if (node.left) traverse(node.left, result)
      if (node.right) traverse(node.right, result)
    }
  }
  let result = []
  if (!node) return result
  traverse(node, result)
  return result
}

const inOrder = node => 
{
  const traverse = (node, result) => {
    if (node.left) traverse(node.left, result)
    result.push(node.data)
    if (node.right) traverse(node.right, result)
  }
  let result = []
  if (!node) return result
  traverse(node, result)
  return result
}

const postOrder = node => 
{
  const traverse = (node, result) => {
    if (!node) {}
    else {
      if (node.left) traverse(node.left, result)
      if (node.right) traverse(node.right, result)
      result.push(node.data)
    }
  }
  let result = []
  if (!node) return result
  traverse(node, result)
  return result
}

Refer to: https://www.codewars.com/kata/5268956c10342831a8000135

  1. Level Recursion - Traverse by each level of a Binary Search Tree
def tree_by_levels(node):
    result = []

    if node is None:
        return result

    level = [node]

    while len(level) > 0:
        temp = []
        nxt = []

        for c in level:
            if c is None:
                continue
            temp.append(c.value)
            nxt.append(c.left)
            nxt.append(c.right)

        for x in temp:
            result.append(x)

        level = nxt

    return result

Refer to: https://www.codewars.com/kata/52bef5e3588c56132c0003bc

Make a Balanced Binary Search Tree

Given a sorted Array or List:

const sortedArrayToBST = A => {
    const L = A.length
    if (L === 0) return null
    if (L === 1) return new TreeNode(A[0], null, null)
    if (L === 2) return new TreeNode(A[1], new TreeNode(A[0], null, null), null)
    if (L === 3) return new TreeNode(A[1], new TreeNode(A[0], null, null), new TreeNode(A[2], null, null))
    if (L > 3) return recurse(A)
}

const findPivot = arr => arr[Math.floor(arr.length / 2)]

const firstHalf = arr => arr.slice(0, Math.floor(arr.length / 2))

const secondHalf = arr => arr.slice(Math.floor(arr.length / 2) + 1, arr.length)

const recurse = A => {
    const L = A.length

    if (L === 0) return null
    if (L === 1) return new TreeNode(A[0], null, null)
    if (L === 2) return new TreeNode(A[1], new TreeNode(A[0], null, null), null)
    if (L === 3) return new TreeNode(A[1], new TreeNode(A[0], null, null), new TreeNode(A[2], null, null))
    if (L > 3) {
        let node = new TreeNode(findPivot(A), null, null)
        node.left = recurse(firstHalf(A))
        node.right = recurse(secondHalf(A))
        return node
    }
}
  1. https://trees-visualizer.netlify.app/trees
  2. https://www.codewars.com/kata/5268956c10342831a8000135
  3. https://www.codewars.com/kata/52bef5e3588c56132c0003bc
  4. https://leetcode.com/problems/convert-sorted-array-to-binary-search-tree

Code samples:

  1. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/course/9%20-%20BST
  2. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/course/10%20-%20AVL

Data Structures: Singly Linked List

A "chain-like" derived Data Structure that connects instances head to tail through a next attribute, each instance holding a value.

Attributes

Time Complexity

Generally, the average time complexity for most operations will be O(N). One needs to traverse through a Linked List to find the right instances to alter, insert a new instance between, remove, etc.

The best case is O(1) - the desired instance can be right at the beginning of the Linked List at the head.

Implementations

// JavaScript
var LinkedList = function(val, next) {
    this.val = val;

    if (next === undefined || next === null) this.next = null;
    else this.next = next;
}

Here, I implement a very simple Linked List. Java's built-in LinkedList is a doubly linked list that maintains references to its first and last nodes. The implementation below explicitly stipulates a head and tail and uses Node objects (specifically the next property) to create a chain.

// Java
public class Node {
  private Node next;
  private Object data;

  public Node(Object data, Node next) {
    this.data = data;
    this.next = next;
  }

  //... Getters and Setters
}

public class LinkedList {
  private Node head;
  private Node tail;

  public LinkedList() {
    Node tail = new Node(null, null);
    this.head = new Node(null, tail);
    this.tail = tail;
  }

  public LinkedList(Node head, Node tail) {
    this.head = head;
    this.tail = tail;
  }
}

Common Operations

// JavaScript
function append(head, val) {
  if (head == null) head = new LinkedList(val, null);
  else {
    var current = head;
    while (current.next != null) {
      current = current.next;
    }
    current.next = new LinkedList(val, null);
  }
  return head;
};

function prepend(head, val) {
  if (head == null) head = new LinkedList(val, null);
  else {
    var current = head;
    head = new LinkedList(val, current);
  }
  return head;
};

/** Remove by value not by index. */
function del(head, val) {
  let current = head, arr = []

  while (current != null) {
    if (current.val != val) arr.push(current.val)
    current = current.next
  }

  return buildList(arr);
};

/** Insert at position - starting at 1. Position 1 prepends. */
function insrt(head, val, position) {
  if (head == null || position === 1) return new LinkedList(val, head);
  var current = head;
  var last = head;
  for (var i = 1; i < position; i++) {
    last = current;
    current = current.next;
  }
  last.next = new LinkedList(val, current);
  return head;
};

function checkCycle(head) {
  // Track visited nodes by reference - tracking values would
  // falsely report a cycle for lists containing duplicate values.
  var seen = new Set();
  var current = head;
  while (current != null) {
    if (seen.has(current)) return true;
    seen.add(current);
    current = current.next;
  }
  return false;
};

Notes

  1. A LinkedList can be thought of as a Tree with only one child element at each node.

Data Structures: Stack

A "paper-stack" like derived object that's LIFO (last in, first out).

Often implemented with an Array or LinkedList.

Fun Algorithms

  1. Stacks are often useful when two potentially non-contiguous items must be paired and/or canceled out (such as (, ) pairs).
  2. In such cases, items can be added to a Stack (push), compared against the top of the Stack (peek), and removed (pop).

Consider Reverse Polish Logic Notation:

// Assumptions/Constraints: WFF, whitespaces correctly spaced
String[] A = "T T → T → T →".split(" ");

Stack<String> S = new Stack<>();

for (int i = 0; i < A.length; i++){
    String P = A[i];

    if (P.equals("∨")) {
        String X = S.pop();
        String Y = S.pop();
        if (X.equals("T") || Y.equals("T")) S.push("T");
        else S.push("F");

    } else if (P.equals("∧")) {
        String X = S.pop();
        String Y = S.pop();
        if (X.equals("T") && Y.equals("T")) S.push("T");
        else S.push("F");

    } else if (P.equals("→")) {
        String X = S.pop();
        String Y = S.pop();
        // Evaluates Y → X: X is popped first but is the second operand
        if (Y.equals("T") && X.equals("F")) S.push("F");
        else S.push("T");

    } else if (P.equals("¬")) {
        String X = S.pop();
        if (X.equals("T")) S.push("F");
        else S.push("T");

    } else S.push(P);
}

//S.pop()

T T → T → T → is equivalent to ((T → T) → T) → T in standard notation. The key insight here is that each binary operator pops its two Boolean operands off the top of the Stack (the unary ¬ pops one).

Consider Reverse Polish Notation:

// Assumptions/Constraints: WFF, whitespaces correctly spaced
String[] A = "2 2 + 2 1 + * 2 +".split(" ");

Stack<Double> S = new Stack<>();

for (int i = 0; i < A.length; i++){
    String X = A[i];

    if (X.equals("+")) S.push(S.pop() + S.pop());

    else if (X.equals("-")) {
        Double N = S.pop();
        Double M = S.pop();
        S.push(M - N);

    } else if (X.equals("/")) {
        Double N = S.pop();
        Double M = S.pop();
        S.push(M / N);

    } else if (X.equals("*")) S.push(S.pop() * S.pop());

    else S.push(Double.parseDouble(X));
}

//S.pop()

2 2 + 2 1 + * 2 + is equivalent to ((2 + 2) * (2 + 1)) + 2 in standard notation. The key insight here is again that each operator pops its two Number operands off the top of the Stack.

Also, Valid Parentheses:

String input = "({{{}[][][][][][][]}}()[])";

Stack<String> S = new Stack<>();

for (int i = 0; i < input.length(); i++){
    String P = String.valueOf(input.charAt(i));

    if (P.equals(")")) {
        if (S.size() > 0 && S.peek().equals("(")) S.pop();
        else S.push(P);

    } else if (P.equals("}")) {
        if (S.size() > 0 && S.peek().equals("{")) S.pop();
         else S.push(P);

    } else if (P.equals("]")) {
        if (S.size() > 0 && S.peek().equals("[")) S.pop();
        else S.push(P);

    } else S.push(P);
}

Every pair of Parentheses must pair and cancel each other out - the input is valid iff the Stack is empty after the loop.

A Stack variant of Math Equation Solver (eval or the like not allowed):

// Assumptions/Constraints: WFF, parentheses valid
String inputString = "((((12/3)/2)-(-5*4))+10)";

Stack<Character> P_stack = new Stack<>();
Stack<Double> N_stack = new Stack<>();
Stack<Character> O_stack = new Stack<>();
String currentNum = "";

// Parenthesis and operator tokens referenced below
Character left = '(', right = ')';
Character add = '+', sub = '-', multi = '*', div = '/';

for (int i = 0; i < inputString.length(); i++) {
    Character C = inputString.charAt(i);

    if (C.equals(left)) P_stack.push(C);

    else if (C.equals(right)) {
        if (currentNum.length() > 0) {
            N_stack.push(Double.valueOf(currentNum));
            currentNum = "";
        }

        if (P_stack.peek().equals(left)) {
            P_stack.pop();

            if (N_stack.size() > 1) {
                Double X = N_stack.pop();
                Double Y = N_stack.pop();
                Character O = O_stack.pop();

                if (O.equals(add)) N_stack.push(X + Y);
                if (O.equals(multi)) N_stack.push(X * Y);
                if (O.equals(div)) N_stack.push(Y / X);
                if (O.equals(sub)) N_stack.push(Y - X);
            }
        }
    }

    else if (C.equals(add) || C.equals(sub) || C.equals(multi) || C.equals(div)) {
        if (C.equals(sub)) {
            Character L = inputString.charAt(i-1); // Will never be first number with correct parentheticals
            if (L.equals(add) || L.equals(sub) || L.equals(multi) || L.equals(div) || L.equals(left)) currentNum += "-";
            else {
                O_stack.push(C);

                if (currentNum.length() > 0) {
                    N_stack.push(Double.valueOf(currentNum));
                    currentNum = "";
                }
            }

        } else {
            O_stack.push(C);

            if (currentNum.length() > 0) {
                N_stack.push(Double.valueOf(currentNum));
                currentNum = "";
            }
        }
    }

    else currentNum += C;
}

//N_stack.pop()

Same intuitions as before but using alternating Stacks.

  1. https://leetcode.com/problems/min-stack/
  2. https://leetcode.com/problems/maximum-frequency-stack/
  3. https://leetcode.com/problems/implement-stack-using-queues/
  4. https://leetcode.com/problems/design-a-stack-with-increment-operation/
  5. https://leetcode.com/problems/build-an-array-with-stack-operations/

Code samples:

  1. https://github.com/Thoughtscript/java_algos/blob/main/src/main/java/io/thoughtscript/algos/ReversePolishNotationLogic.java
  2. https://github.com/Thoughtscript/java_algos/blob/main/src/main/java/io/thoughtscript/algos/ReversePolishNotation.java
  3. https://github.com/Thoughtscript/java_algos/blob/main/src/main/java/io/thoughtscript/algos/ValidParentheses.java
  4. https://github.com/Thoughtscript/java_algos/blob/main/src/main/java/io/thoughtscript/algos/MathEquationSolverStack.java

Data Structures: Queue

A "waiting line" (British "queue"): a derived object that's FIFO (first in, first out) - the first element added is the first element removed.

Often implemented with an Array or LinkedList.

  1. https://www.codewars.com/kata/52a64cf14009fd59c6000994
  2. https://leetcode.com/problems/queue-reconstruction-by-height/
  3. https://leetcode.com/problems/implement-stack-using-queues/
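
As a minimal sketch (not taken from the linked problems), Java's ArrayDeque exposes the FIFO Queue contract:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueExample {
    public static void main(String[] args) {
        // ArrayDeque is a common Queue implementation (backed by a resizable array)
        Queue<String> queue = new ArrayDeque<>();
        queue.add("first");  // enqueue at the tail
        queue.add("second");
        queue.add("third");

        // FIFO: elements come out in insertion order
        System.out.println(queue.poll()); // first
        System.out.println(queue.poll()); // second
        System.out.println(queue.peek()); // third (peek doesn't remove)
    }
}
```

poll() removes and returns the head of the Queue, while peek() inspects it without removal.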

Data Structures: Sets

Most languages provide some implementation (or approximation) of the Pure Mathematics (Set Theoretic) conception.

In Pure Mathematics, Sets are defined by the following:

Programmatically, Sets are generally characterized by the following (slightly weaker) constraints:

Refer to: Algorithmic Implementations of Common Set Operations

There are some important differences between the above and the pure math conception:

Note that Apache's Java Commons Collections utilities supply both a customized retainAll() and an intersection() method that return distinct Sets.

  1. https://docs.oracle.com/javase/8/docs/api/java/util/List.html#retainAll-java.util.Collection-
  2. https://www.baeldung.com/apache-commons-collection-utils
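
For illustration, a minimal Java sketch of an intersection via the standard library's retainAll() (copying first, since retainAll() mutates the receiver in place):

```java
import java.util.HashSet;
import java.util.Set;

public class SetIntersection {
    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Set.of(1, 2, 3, 4));
        Set<Integer> b = new HashSet<>(Set.of(3, 4, 5));

        // Copy a so the original Set is left untouched
        Set<Integer> intersection = new HashSet<>(a);
        // retainAll keeps only the elements also present in b
        intersection.retainAll(b);

        System.out.println(intersection); // contains exactly 3 and 4
    }
}
```

Apache's CollectionUtils.intersection() returns a fresh collection instead, avoiding the copy step above.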

Data Structures: Useful

Some interesting Data Structures of note (that are typically variants of the above):

  1. Priority Queue - a Queue that sorts elements by some specified Comparison or Priority.
    • Define a Comparator to sort elements.
    • Exposes Queue functionalities: add() - Heap Sorted, and in O(logN) time, contains(), peek() - returns the first element without removing it, poll() - removes and returns the first element, etc.
    • Much faster than either (a) building from scratch a Data Structure that's required to both store and sort each element or (b) naively sorting some existing Data Structure on each pass through a loop.
  2. Tree Map - a Map that supports sorted Key Value pairs.
    • Define a Comparator to sort elements.
    • Exposes Map functionalities: put() - Heap Sorted, and in O(logN) time, containsKey(), get(), keySet(), etc.
  1. https://geeksforgeeks.org/priority-queue-set-1-introduction/
  2. https://www.geeksforgeeks.org/treemap-in-java/
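
A minimal Java sketch of both structures (the Comparator here is just reverseOrder() for illustration):

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.TreeMap;

public class SortedStructures {
    public static void main(String[] args) {
        // Max-heap via a reversed Comparator; add()/poll() are O(log N)
        PriorityQueue<Integer> pq = new PriorityQueue<>(Comparator.reverseOrder());
        pq.add(3);
        pq.add(10);
        pq.add(7);
        System.out.println(pq.poll()); // 10 - highest priority first

        // TreeMap keeps keys sorted (natural order unless a Comparator is supplied)
        TreeMap<String, Integer> tm = new TreeMap<>();
        tm.put("banana", 2);
        tm.put("apple", 1);
        System.out.println(tm.firstKey()); // apple
    }
}
```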

Computer Science: Arguments and Parameters

Often used interchangeably (and often confused).

Parameters are the named placeholders used in the definition of a function or method signature. They specify the range of values and types that a function can take.

Arguments are the values that a function takes when being called or invoked.
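
A minimal Java illustration of the distinction (add is a hypothetical method used only for this example):

```java
public class ParamsVsArgs {
    // x and y are parameters: named placeholders in the method signature
    static int add(int x, int y) {
        return x + y;
    }

    public static void main(String[] args) {
        // 2 and 3 are arguments: the concrete values supplied at the call site
        System.out.println(add(2, 3)); // 5
    }
}
```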

Original Source of Confusion

From: https://chortle.ccsu.edu/java5/Notes/chap34A/ch34A_3.html - I think the point of confusion arises from the following original terminology:

Computer Science: Transient Objects

It's often useful to have "temporary", in-memory objects, that aren't persisted or saved to a database.

Such Transient Objects can be Fields used in Serializing/Deserializing, validation, etc.

Transient Fields

// Java
@Entity
public class Person {

    @Transient
    private String temporaryNote;

}

Note that in Java, the @Transient annotation and the keyword transient accomplish much the same thing. Fields and their values are ignored and/or replaced with another value.

If one both implements Serializable and uses the transient keyword in Java, transient fields are skipped by default serialization and come back as default values on deserialization; custom writeObject()/readObject() methods can then persist the original values separately. (This pattern was often used for PII since it separates sensitive data into parts that have to be reassembled.)

// Java
public class Employee implements Serializable {
    private static final long serialVersionUID = 1L;
    private transient Address address;

    private void writeObject(ObjectOutputStream oos) 
      throws IOException {
        oos.defaultWriteObject();
        oos.writeObject(address.getHouseNumber());
    }

    private void readObject(ObjectInputStream ois) 
      throws ClassNotFoundException, IOException {
        ois.defaultReadObject();
        Integer houseNumber = (Integer) ois.readObject();
        //... reconstruct an Address a from houseNumber, then:
        this.setAddress(a);
    }

    //...
}
# Ruby
class MyModel < ActiveRecord::Base
  attribute :small_int, :integer, limit: 2
end

Data Transfer Objects

Plain Old Java Objects or some other intermediary object (say, in Ruby) can be used to get and pass data into the relevant domain entity (Hibernate or ActiveRecord, above).

  1. https://api.rubyonrails.org/classes/ActiveRecord/Attributes/ClassMethods.html
  2. https://www.baeldung.com/jpa-transient-ignore-field
  3. https://www.baeldung.com/java-serialization

Computer Science: Pointers

A Variable is typically a compound of a declaration keyword (or type), a name, an assignment symbol, and a value.

int myVar = 1;

Pointers point to a Variable’s address / reference in memory.

One can think of a Pointer as a reference to a value in memory or as a memory address.

int *numAddress = &myVar;

Dereferencing the Pointer / address to get the value back.

int derefNum = *numAddress;
myVar == derefNum; // True

Computer Science: Processes and Threads

A Process typically corresponds to a Program. (Many programs use multiple Processes that intercommunicate through Inter-Process Communication (IPC) - e.g., Electron.js.)

Processes are often run in an unmanaged way ("fire and forget", running silently in the background).

A Process usually has multiple Threads.

Threads have their own Stack and don't necessarily Synchronize their activities. Threads are often managed by the same Program (running, composing, starting, stopping, execution, intercommunication). They can interact in somewhat unpredictable ways, in total isolation, or be Synchronized using:

To illustrate this relationship further:

One finer point that's sometimes forgotten: the Node engine is Single-Threaded, so one Process is one Thread (under normal circumstances). Accordingly, child_process's exec, fork, and spawn all create new Single-Threaded Processes.
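
A minimal Java sketch of Synchronized Threads sharing one Heap object (the worker names are arbitrary):

```java
public class ThreadExample {
    public static void main(String[] args) throws InterruptedException {
        StringBuilder log = new StringBuilder();

        // Each Thread gets its own call stack but shares Heap objects like log;
        // synchronized coordinates access to the shared object
        Runnable task = () -> {
            synchronized (log) {
                log.append(Thread.currentThread().getName()).append(" ");
            }
        };

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join(); // wait for both Threads to finish
        t2.join();

        System.out.println(log.toString().trim());
    }
}
```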

Computer Science: Heap vs Stack

Objects stored in Heap memory are shared across all Threads and persist beyond a single function call.

Objects stored in Stack memory are available only for the duration of a method or function call (typically the scope of a function).

It's convenient to think of a Stack as a "local" Heap.

Objects can be stored:

In C++, Stack memory is automatically reclaimed when a function returns (there is no Garbage Collector), and Heap memory requires explicit allocation and deallocation calls (to, say, persist something in memory beyond the lifespan of a specific function).
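
A minimal Java illustration of the distinction (variable names are arbitrary):

```java
public class HeapVsStack {
    public static void main(String[] args) {
        // Primitive local: its value lives in this frame's Stack memory
        int local = 42;

        // Object: the reference lives on the Stack, the object itself on the Heap,
        // where it's reachable from any Thread that holds the reference
        int[] onHeap = new int[] {1, 2, 3};

        System.out.println(local + " " + onHeap.length);
    } // when main returns, the frame (and local) is gone; the array becomes garbage
}
```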

Computer Science: Hexadecimal Numbers

Fully general algorithm.

// JavaScript
// 0-9
// A-F represent 10-15
const NUMS = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D", "E", "F"];

const decimalToHexadecimal = (num, remainders = []) => {
  if (num < 0) {
    // Specific to num < 0
    // A trick to convert negative decimals to hex
    num = num + Math.pow(2, 32)
  }

  const r = num % 16
  const d = Math.floor(num / 16)

  remainders.push(NUMS[r])

  if (d < 1) return remainders.reverse().join("")
  return decimalToHexadecimal(d, remainders)
}

const hexToDecimal = hex => {
  let num = 0 
  // Any hexadecimal that begins with 8-F is negative
  if (NUMS.indexOf(hex.charAt(0)) >= 8) num -= Math.pow(2, 32)
  let order = hex.length

  for (let i = 0; i < hex.length; i++) {
    order--;
    var n = NUMS.indexOf(hex.charAt(i));
    num += n * Math.pow(16, order);
  }

  return num;
}

Int to Hex


Hex to Binary in JS:

// JavaScript
var hexToBinaryMap = {
    "0": "0000",
    "1": "0001",
    "2": "0010",
    "3": "0011",
    "4": "0100",
    "5": "0101",
    "6": "0110",
    "7": "0111",
    "8": "1000",
    "9": "1001",
    "a": "1010",
    "b": "1011",
    "c": "1100",
    "d": "1101",
    "e": "1110",
    "f": "1111"
}

function intToHex(num) {
    var hex = parseInt(num, 10).toString(16);
    if (hex == "0") hex = "0000";
    return hex;
}

function hexTo16BitBinary(hexString) {
    var l = hexString.length, innerString = "";
    for (var i = 0; i < l; i++) {
        innerString += hexToBinaryMap[hexString.charAt(i)];
    }
    return innerString;
}

Computer Science: Signed and Unsigned Numbers

Signed numbers can be positive, negative, or zero (think the full Integers or Reals).

Unsigned numbers can't be negative (think the Naturals: zero and the positives).

Important Terminology

  1. Radix – number of unique digits used to represent a number

    • 16-bit binary is base 2 (radix 2) since it uses 2 digits.
    • Hexadecimal is radix 16 since it uses 16 digits (base 16).
  2. Base – the number system (2, 10, decimal, etc.).

    • Refer to Log notes above.
    • Implicitly base 10 or 2 otherwise.
  3. Mantissa – Two meanings:

    • The Significand (IEEE 754 double-precision has a 52-bit Significand, or 53 bits with the hidden bit) and, informally, the digits following a decimal point (x.yyyyyyy).
    • The first sense defines the number of bits used to represent the significant digits (with or without a sign bit) that are multiplied by some power value (e.g. – [the binary representation of] 12345 × 10^-5).
    • The second sense expresses the intent of the first operation.
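
A minimal Java sketch of signed vs. unsigned interpretations of the same 32-bit pattern:

```java
public class SignedUnsigned {
    public static void main(String[] args) {
        // Java's int is signed 32-bit two's complement: -1 is all ones
        System.out.println(Integer.toBinaryString(-1)); // 11111111111111111111111111111111

        // "Unsigned" reinterpretation of the same bit pattern
        System.out.println(Integer.toUnsignedString(-1)); // 4294967295 (2^32 - 1)

        // parseUnsignedInt accepts values up to 2^32 - 1
        int u = Integer.parseUnsignedInt("4294967295");
        System.out.println(u); // prints -1: same bits, signed interpretation
    }
}
```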

Computer Science: Bitwise Operators

Note that Java truncates leading zeros (e.g., Integer.toBinaryString(2) prints 10, not 0010).

  1. https://www.baeldung.com/java-bitwise-operators
  2. https://codegym.cc/groups/posts/10-bitwise-operators
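
A minimal Java sketch of the common bitwise operators, including the leading-zero truncation noted above:

```java
public class BitwiseExample {
    public static void main(String[] args) {
        int a = 0b0110; // 6
        int b = 0b0011; // 3

        System.out.println(Integer.toBinaryString(a & b));  // 10   (AND  -> 2)
        System.out.println(Integer.toBinaryString(a | b));  // 111  (OR   -> 7)
        System.out.println(Integer.toBinaryString(a ^ b));  // 101  (XOR  -> 5)
        System.out.println(Integer.toBinaryString(a << 1)); // 1100 (left shift  -> 12)
        System.out.println(Integer.toBinaryString(a >> 1)); // 11   (right shift -> 3)

        // Note the truncation: leading zeros aren't printed
        System.out.println(Integer.toBinaryString(2)); // 10, not 0010
    }
}
```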

Computer Science: Object Oriented Design Principles

General concepts and principles of Object Oriented Design (OOD).

Types and Classes

  1. The term Type originates from a long line of math, logic, and philosophy that culminated in ZFC Set Theory and Ramified Type Theory in the early 20th century.
  2. ZFC Set Theory formalizes the prior notion of a Class as a specific kind of well-ordered and -behaving Set (fixing the bugs in Fregean, Cantorian, and Russellian systems).
  3. Ramified Type Theory formalizes the prior notion of a Class as a hierarchy of Types.
  4. Classes are abstract patterns or templates that are instantiated into or as particular Objects (per Plato).

Modern programming languages often include a mixture of Classes, Types, and/or Sets.

Below, Class and Type will mostly be used interchangeably (and while these align with the mathematical concept of a Set, Set will be strictly reserved for the Data Structure Type).

OOD Principles

  1. Encapsulation:
    • Boundaries exist (Getters and Setters, Dot Notation, field access, Ruby's @, attr_accessor) between Classes and particular Objects.
    • Visibility (Java's public, protected, package, and private) can be controlled.
  2. Aggregation:
    • Class definitions can be nested.
    • Classes can exist as Inner or Outer Classes.
  3. Inheritance:
    • Classes exist in hierarchies and features of top-level Classes are present in their Descendants.
    • Generics in Java.
    • Multiple Inheritance and Multiple Inheritance Conflict Resolution in Python.
  4. Polymorphism:
    • A Class can implement multiple Interfaces (Java).
    • Generally, a Class needn't be singly-Typed (it can be Typed in multiple ways without necessarily requiring Multiple Inheritance).
  5. Abstraction:
    • Where a Class is a Parent of another.
    • Where some Superclass is used to reason about or define permissible Subclasses.
    • A Class can implement an Interface (which defines Methods only up to their Signature) and must have the stipulated Functions or Methods present (Java and GoLang's Interfaces).
    • A Class can be Abstract which requires it to be Subclassed to be Instantiated (if the language supports such a concept).
    • In Java, an Abstract Class can define Fields with initialized Values and fully defined Methods.
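
A minimal Java sketch touching several of these principles at once (class names are invented for illustration):

```java
// Abstraction: an abstract class with an initialized field and an abstract method
abstract class Shape {
    // Encapsulation: the field is private, exposed only via a getter
    private final String name;

    Shape(String name) { this.name = name; }

    String getName() { return name; }

    abstract double area();
}

// Inheritance: Circle descends from Shape and must implement area()
class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    double area() { return Math.PI * radius * radius; }
}

public class OodExample {
    public static void main(String[] args) {
        // Polymorphism: a Circle can be referenced through its supertype
        Shape s = new Circle(1.0);
        System.out.println(s.getName() + " " + s.area());
    }
}
```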

SOLID

  1. Single Responsibility Principle:
    • Class definitions and Types should be scoped to a specific functionality or role.
    • Even with good OOD, one might be tempted to import a single Class everywhere (or at least in multiple places).
    • Despite say proper Visibility and Encapsulation controls, developers might then misuse functionalities or import unintended Variables or Methods.
    • This aligns with proper Separation of Concerns as a guiding principle. Resources should be defined only up to and isolated by their intended functionality (rather than sloppily blurring or intermixing intentions, intended uses, or meanings).
  2. Open and Closed:
    • Types should be extensible (extendable, subclassable).
    • But Subclasses shouldn't modify their Parent or Super Classes.
    • Accessing Properties, the Constructor, Fields, and/or Methods of a Parent requires some explicit and verbose keyword or operation (super, super()). Even then, doing so does not alter the Superclass definition (itself).
  3. Liskov Substitution Principle:
    • If P is a property of a Type A and B is a Subtype of A, then P is a property of B.
    • Properties, Methods, and Fields of a Parent will be automatically inherited by (implicitly or explicitly present in the definition of) their Subtypes.
  4. Interface Segmentation:
    • Types shouldn't unnecessarily implement (or be an implementation of an unnecessary) Interfaces.
    • Types also shouldn't unnecessarily inherit from unneeded Superclasses.
  5. Dependency Inversion Principle:
    • Unidirectional Top to Bottom dependency chain.
    • More abstract Types don't depend on their less abstract Subtypes.
    • On its own, Liskov Substitution could read as a symmetric principle. In tandem with the Dependency Inversion Principle, inheritance of Properties, Methods, and so on becomes one-directional.
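
A minimal Java sketch of Dependency Inversion (all names invented for illustration): the high-level Notifier depends on the MessageSink abstraction rather than on any concrete implementation:

```java
// The high-level class depends on this abstraction, not on a concrete subtype
interface MessageSink {
    void send(String message);
}

// A low-level detail implementing the abstraction
class ConsoleSink implements MessageSink {
    @Override
    public void send(String message) { System.out.println(message); }
}

class Notifier {
    private final MessageSink sink;

    // The dependency is injected as the abstract type: swapping implementations
    // never requires changing Notifier
    Notifier(MessageSink sink) { this.sink = sink; }

    void notifyUser(String msg) { sink.send("notify: " + msg); }
}

public class DipExample {
    public static void main(String[] args) {
        new Notifier(new ConsoleSink()).notifyUser("hello");
    }
}
```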

Java

Consult this article for a discussion on Java-specific OOD principles, concepts, and code examples.

Go

Consult this article for a discussion on GoLang-specific OOD principles, concepts, and code examples.

Ruby

Consult this article for a discussion on Ruby-specific OOD principles, concepts, and code examples.

Python

Consult this article for a discussion on Python-specific OOD principles, concepts, and code examples.

  1. https://www.digitalocean.com/community/conceptual-articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design

C++: General Concepts

A compiled, Object Oriented, statically typed programming language that predates most others. Some quirks:

  1. C++ programs can compile and execute even when they contain serious errors (undefined behavior)!
  2. C++ supports Pointers and Dereferencing.
  3. C++ divides files into Class Definitions (suffixed .h files) and their implementations (suffixed .cpp files).
  4. C++ often requires more explicit use of Garbage Collecting and memory management (especially w.r.t. the Heap).

Runtime Environment

Use gcc through Xcode (if on a Mac) and cmake for compilation and execution of C++ code:

$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.0 (clang-1100.0.33.16)
Target: x86_64-apple-darwin19.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
$ cmake --version
cmake version 3.16.6

CMake suite maintained and supported by Kitware (kitware.com/cmake).

Console Out

cout << "hello world";

Note that << is the stream insertion operator (akin to concatenation with . or + in other languages); it outputs data to the console. >> is the extraction operator and receives data (from, say, cin).

Code samples:

  1. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples

C++: Files and OOD

There is no Interface keyword in C++ (Abstract Classes are instead expressed with pure virtual functions). However:

  1. filename.h a Class Definition that’s pre-compiled (since it's not likely to change).
  2. filename.cpp the implementation of that Class Definition (with full method definitions/implementations).
  3. :: is the scope resolution operator and defines the implemented methods of the class definition.
  4. : is the subclassing operator (equivalent of the extends keyword in Java).

Constructors

C++ Constructors come in a few flavors:

  1. Default Constructors - C++ supplies a default constructor automatically for each Class. It's convenient to think of these as akin to Lombok's @NoArgsConstructor / @AllArgsConstructor annotations (commonly seen in Spring projects).
  2. Customized Constructors - developer-supplied and defined constructors.
  3. Copy Constructors - used internally or explicitly invoked to initialize a new Object with the values of another member of the same Class.

Refer to: https://en.cppreference.com/w/cpp/language/default_constructor, https://en.cppreference.com/w/cpp/language/copy_constructor

For example, given ExampleClass.h, ExampleClass.cpp, and main.cpp:

ExampleClass.h:

// header file - think interface/abstract class skeleton - not likely to change and doesn't need to be recompiled
#ifndef Example_Class_H // pragma guard - used at compile time to prevent redundant appending/prepending of compiled code
#define Example_Class_H // if not already defined, add - otherwise it will be ignored

namespace ExampleNamespace {
    class ExampleClass {
        public:
            // even explicitly specifying constructor here defines a custom
            ExampleClass();
            ExampleClass(int _a, int _b);
            int a, b;
            void exampleMethod();
    };
}

#endif

ExampleClass.cpp:

// example_class implementation
#include <iostream> // header in standard library
#include "ExampleClass.h" // header in custom library

using namespace ExampleNamespace; // now you don't have to prepend the namespace when calling methods, etc.
using namespace std;

// implement class method without class syntax here
void ExampleNamespace::ExampleClass::exampleMethod() {
    a = 5;
    b = 100;
    std::cout << "I'm a console out message from ExampleClass exampleMethod()" << endl;
}

// custom default constructor
ExampleNamespace::ExampleClass::ExampleClass() {
    a = 100;
    b = 100;
}

// custom constructor
ExampleNamespace::ExampleClass::ExampleClass(int _a, int _b) {
    a = _a;
    b = _b;
}

main.cpp:

#include <iostream>
#include "ExampleClass.h"

using namespace ExampleNamespace;
using namespace std;

int main() {
    try {
        ExampleClass ec; // custom default constructor
        std::cout << &ec << " with " << ec.a << " " << ec.b << std::endl;

        ExampleClass ecc(1,1); // custom constructor
        std::cout << &ecc << " with " << ecc.a << " " << ecc.b << std::endl;

        ExampleClass * eccc = new ExampleClass; // pointer with custom default constructor
        std::cout << eccc << " with " << (*eccc).a << " " << (*eccc).b << std::endl;
        delete eccc;

        ExampleClass * ecccc = new ExampleClass(111,111); // pointer with custom constructor
        std::cout << ecccc << " with " << (*ecccc).a << " " << (*ecccc).b << std::endl;
        delete ecccc;

        // copies - keeps different addresses
        ExampleClass copyExample;
        copyExample.a = 1000;
        copyExample.b = 1000;

        ExampleClass otherCopyExample;
        otherCopyExample.a = 1111;
        otherCopyExample.b = 1111;

        std::cout << copyExample.a << " " << copyExample.b  << " at: " << &copyExample << " " << otherCopyExample.a << " " << otherCopyExample.b << " at: " << &otherCopyExample << std::endl;
        copyExample = otherCopyExample;
        std::cout << copyExample.a << " " << copyExample.b  << " at: " << &copyExample << " " << otherCopyExample.a << " " << otherCopyExample.b << " at: " << &otherCopyExample << std::endl;

        // memory assignment - note how the memory addresses for the variables remain distinct despite assigning their pointers to each other.
        ExampleClass x;
        x.a = 111;
        x.b = 111;
        ExampleClass * xx = &x;

        ExampleClass y;
        y.a = 1000;
        y.b = 1000;
        ExampleClass * yy = &y;

        std::cout << x.a << " " << x.b  << " at: " << &x << " " << xx << " " << y.a << " " << y.b << " at: " << &y << " " << yy << std::endl;
        xx = yy;
        std::cout << x.a << " " << x.b  << " at: " << &x << " " << xx << " " << y.a << " " << y.b << " at: " << &y << " " << yy << std::endl;
        std::cout << (*xx).a << " " << (*xx).b << " at: " << xx << " " << (*yy).a << " " << (*yy).b << " at: " << yy << std::endl;

        // references
        ExampleClass refCopyExample;
        refCopyExample.a = 1000;
        refCopyExample.b = 1000;

        ExampleClass & otherRefCopyExample = refCopyExample;

        std::cout << refCopyExample.a << " " << refCopyExample.b  << " at: " << &refCopyExample << " " << otherRefCopyExample.a << " " << otherRefCopyExample.b << " at: " << &otherRefCopyExample << std::endl;


    } catch (const std::exception &e) {
        std::cout << e.what() << std::endl;
    }

    return 0;
}

Class Object Assignments

Remember that objects created in the Stack do not automatically persist in the Heap. One illuminating topic is how objects in two distinct functions can be assigned.

ExampleClass methodOne() {  
    ExampleClass ec;
    return ec;  
}

int main() {  
    ExampleClass ecc;
    ecc = methodOne();  
}

Will the above throw an error? No, the following process is performed:

  1. Default Constructor called in both methodOne() (for ec) and main() (for ecc).
  2. Copy Constructor is called when returning from methodOne() (this links the two discrete events on the Stack).
  3. Assignment Operator is called to copy the "innards" from ec to ecc (values are copied from ec to ecc).

Access modifiers

Can be declared in classes (for encapsulated class fields) using the following:

// header file - think interface/abstract class skeleton - not likely to change and doesn't need to be recompiled
#ifndef Example_Class_One_H // pragma guard - used at compile time to prevent redundant appending/prepending of compiled code
#define Example_Class_One_H // if not already defined, add - otherwise it will be ignored

namespace ExampleNamespace {
    class ExampleClassOne {
        public:
            int num;
            void exampleMethodOne();
    };
}

#endif

Class Methods

Example One

ExampleClassOne.h:

// header file - think interface/abstract class skeleton - not likely to change and doesn't need to be recompiled
#ifndef Example_Class_One_H // pragma guard - used at compile time to prevent redundant appending/prepending of compiled code
#define Example_Class_One_H // if not already defined, add - otherwise it will be ignored

namespace ExampleNamespace {
    class ExampleClassOne {
        public:
            int num;
            void exampleMethodOne();
    };
}

#endif

main.cpp:

// example_class implementation
#include <iostream> // header in standard library
#include "ExampleClassOne.h" // header in custom library

using namespace ExampleNamespace; // now you don't have to prepend the namespace when calling methods, etc.
using namespace std;

// implement class method without class syntax here
void ExampleClassOne::exampleMethodOne() {
    cout << "I'm a console out message from ExampleClassOne exampleMethodOne()" << endl;
}

// executable code (must be wrapped in main method)
int main() {
    try {
        ExampleClassOne exampleOne;
        cout << "Review the random number assigned here: " << exampleOne.num << endl;
        exampleOne.num = 2;
        exampleOne.exampleMethodOne();
        cout << exampleOne.num << endl;

    } catch (const std::exception &e) {
        std::cout << e.what() << std::endl;
    }

    return 0;
}

Refer to: https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/6%20-%20class

Example Two

add.h:

#ifndef Example_Class_H
#define Example_Class_H

int add(int x, int y)
{
    return x + y;
}

#endif

main.cpp:

#include <iostream>
#include "add.h"

int main() {
    try {
        std::cout << "The sum of 3 and 4 is " << add(3, 4) << '\n';
    } catch (const std::exception &e) {
        std::cout << e.what() << std::endl;
    }
    return 0;
}

Refer to: https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/3%20-%20dependency

Inheritance

Given a pair of Class Definitions and their implementations: BaseClass and SuperClass.

SuperClass.h:

#ifndef SUPER_CLASS_H
#define SUPER_CLASS_H

namespace ExampleNamespace {
    class SuperClass {
    public:
        int num;
        virtual void superClassMethod();
    };
}

#endif

SuperClass.cpp:

#include <iostream>
#include "SuperClass.h"

using namespace ExampleNamespace;
using namespace std;

void SuperClass::superClassMethod() {
    num = 500;
    std::cout << "superClassMethod() called in SuperClass " << num << std::endl;
}

BaseClass.h:

#ifndef BASE_CLASS_H
#define BASE_CLASS_H

#include "SuperClass.h"

// Specify an associated Namespace - this is akin to a package in Java
namespace ExampleNamespace {
    class BaseClass: virtual public SuperClass {
        public:
            int num;
            void baseClassMethod();
            // Note that this may throw a warning - it can be ignored
            // warning: 'override' keyword is a C++11 extension [-Wc++11-extensions]
            void superClassMethod() override;
            void superEquivalentMethod();
    };
}

#endif

BaseClass.cpp:

#include <iostream>
#include "BaseClass.h"

using namespace ExampleNamespace;
using namespace std;

void BaseClass::baseClassMethod() {
    num = 3;
    std::cout << "baseClassMethod() " << num << std::endl;
}

void BaseClass::superClassMethod() {
    num = 7;
    std::cout << "superClassMethod() override " << num << std::endl;
}

void BaseClass::superEquivalentMethod() {
    SuperClass::superClassMethod();
    std::cout << "superEquivalentMethod() called in BaseClass " << num << std::endl;
}

main.cpp:

// simple classless executable with function.

#include <iostream>
#include "BaseClass.h"

using namespace ExampleNamespace;

int main() {
    try {
        ExampleNamespace::BaseClass bc;
        bc.baseClassMethod();
        bc.superClassMethod();
        bc.superEquivalentMethod();
    } catch (const std::exception &e) {
        std::cout << e.what() << std::endl;
    }
    return 0;
}

Refer to: https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/8%20-%20inheritance

  1. https://en.cppreference.com/w/cpp/language/default_constructor
  2. https://en.cppreference.com/w/cpp/language/copy_constructor

Code samples:

  1. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/7%20-%20constructors
  2. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/6%20-%20class
  3. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/3%20-%20dependency
  4. https://github.com/Thoughtscript/cplusplus-coursera/tree/master/examples/8%20-%20inheritance

C++: Memory

Generally, C++ requires and offers more precise control over memory use in terms of Garbage Collection, variable declaration, and memory addressing.

Garbage Collection

Pointers and References

Consider:

int num = 0;
&num //address of value of num
int* pointer_num = &num; //address of value of num
*pointer_num; //0 

Tip: think of ** as being the same as no * (the two cancel out) - akin to double-negation elimination.

Example

#include <iostream>

int main()
{
    try {
        // ------------------- variable with value -------------------
        int num = 100;
        // note using std::cout explicitly here instead of the using keyword at top of file
        std::cout << "num " << num << std::endl;

        // ------------------- pointer variable with a reference to the address of the variable above  -------------------
        int *numAddress = &num;
        std::cout << "numAddress " << numAddress << std::endl;

        // ------------------- dereference the address to get the value back -------------------
        int derefNum = *numAddress;
        std::cout << "derefNum " << derefNum << std::endl;
        *numAddress = 42;
        std::cout << "numAddress " << numAddress << std::endl;
        std::cout << "*numAddress " << *numAddress << std::endl;

        // ------------------- Reference variables -------------------
        int & refVar = derefNum;
        std::cout << "refVar to derefNum " << refVar << std::endl;

        // ------------------- heap example #1 -------------------
        // declare a pointer variable using new keyword - which automatically (always) assigns memory to the heap
        int * exampleA = new int;
        std::cout << "Uninitialized (indeterminate) heap value: " << exampleA << " " << *exampleA << std::endl;
        delete exampleA;

        // -------------------  heap example #2  -------------------
        // declare a pointer variable and allocate a memory address in heap
        int * heapVariable = (int*) malloc(sizeof(int));
        // assign a value to the pointer variable that doesn't exceed the specified size
        heapVariable[0] = 45;
        std::cout << "Heap assigned value " << heapVariable[0] << std::endl;
        std::cout << "Heap pointer variable / address " << heapVariable << std::endl;
        // return the allocated memory block back to the heap
        free(heapVariable);

        // ------------------- heap example #3 -------------------
        // declare a pointer variable using new keyword - which automatically assigns memory to the heap
        int * newVar = new int;
        std::cout << "Note the value initialized to is " << newVar << " " << *newVar << std::endl;
        *newVar = 1000;
        std::cout << "newVar " << *newVar << " at " << newVar << std::endl;
        // declare a pointer variable assigning NULL
        int * nullVar = NULL;
        // use delete keyword only for variables declared with new keyword or NULL
        delete newVar;
        delete nullVar;

        // ------------------- null pointer versus NULL -------------------
        // NULL is a macro/value that can be assigned to a pointer variable
        // the nullptr keyword specifies a null pointer constant - technically, address 0x0
        int * pointerVar = nullptr;
        // Cannot access nor delete - will throw error or exit code 11 if you attempt either

    } catch (const std::exception& e) {
        std::cout << e.what() << std::endl;
    }

    // main() must always return an exit code
    return 0;

}

C++: Template Functions

C++ supports Template Functions which are akin to using Java Generics in Method definitions.

Example

ExampleClass.h:

#include <iostream>

#ifndef EXAMPLE_CLASS_H
#define EXAMPLE_CLASS_H

using namespace std;

namespace ExampleNamespace {

    // only one type need be flexibly declared
    template<typename T> class ExampleClass {
    public:
        int num;
        T flexibleVar;

        // best to define these in the same class with template<typename T> declaration
        void exampleMethodOne() {
            cout << "exampleMethodOne() " << flexibleVar << " " << typeid(flexibleVar).name() << endl;
        }

        T flexibleMethodOne(T a) {
            cout << "flexibleMethodOne() " << a << " " << typeid(a).name() << endl;
            return a;
        }

        T flexibleMethodTwo(T a, T b) {
            T result = a + b;
            cout << "flexibleMethodTwo() " << result << " " << typeid(result).name() << endl;
            return a + b;
        }
    };
}

#endif

main.cpp:

#include <iostream>
#include "ExampleClass.h"

using namespace std;
using namespace ExampleNamespace;

// Within the definition of a Function
template<typename V>
V standaloneExampleMethod(V x) {
    return x;
}

int main() {
    try {

        ExampleClass<string> ec;
        string random = "I am a random string";
        ec.flexibleVar = random;
        ec.num = 0;

        ec.exampleMethodOne();
        ec.flexibleMethodOne("text");
        ec.flexibleMethodTwo("hello","world");
        std::cout << "My values are " << ec.flexibleVar << " " << ec.num  << '\n';

        ExampleClass<int> ecc;
        ecc.flexibleVar = 5;
        ecc.num = 100;
        ecc.flexibleMethodTwo(1,2);
        std::cout << "My values are " << ecc.flexibleVar << " " << ecc.num  << '\n';

        string a = standaloneExampleMethod("hello");
        int b = standaloneExampleMethod(5);
        std::cout << "Flexible template values " << a << " " << b  << '\n';

    } catch (const std::exception &e) {
        std::cout << e.what() << std::endl;
    }

    return 0;
}

Ruby: General Concepts

  1. nil is the nullish value keyword.
  2. Ruby is an inherently Synchronous language: there's no native Promise equivalent (concurrency is instead handled with Threads, Fibers, and similar constructs).
  3. Modules are Mixins that can be included in a Class. This also provides Java-like Interface and Abstract Class reuse.
  4. All Function, Method, Procs, and Lambda-types are Closures.
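Point 3 can be sketched with a minimal example (Greetable and Person are hypothetical names, not from any library):

```ruby
# A Module used as a Mixin - included into a Class for Interface-like reuse
module Greetable
  def greet
    "Hello, #{name}!"
  end
end

class Person
  include Greetable

  attr_reader :name

  def initialize(name)
    @name = name
  end
end

p Person.new("Ada").greet # => "Hello, Ada!"
```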

Ruby: Closures

All Function, Method, Procs, and Lambda-types are Closures.

Examples

# More Closures
## Closures include any of procs, lambdas, methods, functions, blocks

#-----------------------------------#

# Procs (functions)
## Lambda proc with no variable name
-> (arg) { p arg + 1 }.call(2)

## Proc new keyword with no assignment
Proc.new { |n| p n + 111 }.call(2)
## Proc new keyword with assignment
proc1 = Proc.new { |n| n ** 2 }
p proc1.call(4)

## Proc from block
def make_proc(&block)
  block
end
proc2 = make_proc { |x| x + 7 }
p proc2.call(4)

#-----------------------------------#

# Lambdas
## With variable name
varName = -> (arg){p arg + 1}
varName.call(2)

#-----------------------------------#

# Blocks
## Can be a method
def example1
  yield
end
example1 { p 2 + 4 }

def example2(&block)
  block.call
end
example2 { p 5 + 8 }

## Helpful for arrays
[1, 2, 3, 4].each do |x|
  p x
end
[1, 2, 3, 4].each { |x| p x }

Lambda Proc with no variable name (Anonymous Function called with an actual parameter or argument):

-> (arg) { p arg + 1 }.call(2)

Methods

# Methods
## Last line is automatically returned
## No explicit return type needed

def scoring(x, y)
  x + y
end

## It can be added however

def add(x, y)
  return x + y
end

puts(scoring(1,2))
puts(add(1,4))

## Parameterization - key arguments

def order_irrelevant_key_args(arg1:, arg2:, arg3:)
  arg1 + arg2 + arg3
end

p order_irrelevant_key_args(arg1:1, arg3: 2, arg2: 3)
p order_irrelevant_key_args(arg3: 2, arg2: 3, arg1:3,)

def order_irrelevant_optional_key_args(arg1:, arg2:1, arg3:2)
  arg1 + arg2 + arg3
end

p order_irrelevant_optional_key_args(arg1:1)
p order_irrelevant_optional_key_args(arg1:1, arg3:5)

## Parameterization - arguments

def order_matters(arg1, arg2, arg3)
  arg1 + arg2 + arg3
end

p order_matters(1,2,3)

def order_matters_optional(arg1 = 1, arg2 = 2, arg3)
  arg1 + arg2 + arg3
end

p order_matters_optional(3)

## Parameterization - optional (as arr)

## Logically, this order of args is required
## Standard args occur first (and ordering matters)
## It also looks for any key args
## The remainder are optional and specified by * (... in JS)
def optional(arg1, *opt, arg2:)
  total =  arg1 + arg2
  opt.each{|x| total = total + x }
  total
end

p optional(1,2,3,4, arg2: 5)

Note: official Ruby documentation refers to all Functions as Methods (and uses these interchangeably):

  1. https://docs.ruby-lang.org/en/2.0.0/syntax/methods_rdoc.html
  2. https://ruby-doc.com/docs/ProgrammingRuby/html/tut_methods.html

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/tree/main/ruby/8-closures
  2. https://github.com/Thoughtscript/rrr_2024/tree/main/ruby/2-methods

Ruby: Hashes and Arrays

Hash

A Hash is the dict / map equivalent (it can even stand in for an array, since arrays are just integer-indexed collections).

Named Key - Value pairs (e.g. - Objects in JS, dicts in Python).

# hash_example = {}
# hash_example = Hash.new
hash_example = Hash[]
hash_example['a'] = 1
hash_example[:b] = 100

Array

Array - an Array proper (e.g. - Key by index Value).

Expands in size dynamically - ArrayList-like (Java).

arr_example = Array.new() 
arr_example = []
arr_example.push(1)
arr_example.push(2)
arr_example.push(3)
p arr_example.first
p arr_example.last

In Ruby, Array#length and Array#size accomplish the same thing.

size is an alias for the length method (both are methods, not fields).

The alias was added for uniformity, since many languages use length (Java, JavaScript) while C++ uses size().

Refer to the Documentation.
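A quick illustration of the alias (plain Ruby, core classes only):

```ruby
arr = [1, 2, 3]
p arr.length # => 3
p arr.size   # => 3 - same method, different name

# Hash supports both as well
h = { 'a' => 1, :b => 2 }
p h.length   # => 2
p h.size     # => 2
```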

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/10-hashes/main.rb
  2. https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/7-arrays/main.rb

Ruby: Interceptors

  1. before_action - a Rails feature providing the same functionality as a Java Interceptor: it executes a method prior to other actions being performed within an HTTP handler (Controller action).

Ruby: Object Oriented Design

Inheritance

class Animal
  attr_accessor :name

  def initialize(name)
    @name = name
  end

  def speak
    "Hello!"
  end
end

class GoodDog < Animal
  def initialize(name, color)
    super(name)
    @color = color
  end

  def speak
    super + " from GoodDog class"
  end
end
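Exercising the classes above shows both uses of super (the classes are re-declared so the snippet runs standalone):

```ruby
class Animal
  attr_accessor :name

  def initialize(name)
    @name = name
  end

  def speak
    "Hello!"
  end
end

class GoodDog < Animal
  def initialize(name, color)
    super(name) # invoke the superclass constructor
    @color = color
  end

  def speak
    super + " from GoodDog class" # extend the superclass method
  end
end

dog = GoodDog.new("Rex", "brown")
p dog.speak # => "Hello! from GoodDog class"
p dog.name  # => "Rex"
```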

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/6-ood/main.rb

Ruby: Exception Handling

  1. Use fail instead of raise in circumstances where an Exception will not be handled and rethrown as something more specific.
  2. Use raise otherwise.
  3. rescue is the relevant "catch" keyword.
  4. When defining customized Exceptions, inherit from StandardError.

Examples

begin
  p hash_example['a']['b']
rescue Exception # Should generally prefer to rescue StandardError
  p "hash_example['a']['b'] throws an exception" 
end
begin
  p hash_example['a']['b']
rescue Exception # Should generally prefer to rescue StandardError
  p "hash_example['a']['b'] throws an exception" 
ensure
  p "I'm a finally block"
end
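A customized Exception (per point 4) can be sketched as follows; ExampleError is a hypothetical name inheriting from StandardError:

```ruby
# Custom Exceptions should inherit from StandardError, not Exception
class ExampleError < StandardError; end

begin
  raise ExampleError, "something specific went wrong"
rescue StandardError => e
  p e.class   # => ExampleError
  p e.message # => "something specific went wrong"
end
```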
  1. https://bulldogjob.com/readme/ruby-gotchas-that-will-come-back-to-haunt-you

Ruby: Techniques

Strings and Symbols

# Ruby symbols and strings

## Comparison - these are not the same
a = 'hello'
b = :hello
p a == b

## Hash - different keys
x = Hash[]
x['a'] = 1
x[:a] = 2
p x['a'] == x[:a]
p x

## Via try (requires ActiveSupport)
y = Hash[]
y['b'] = { 'c' => 100 }
p y
p y['b']['c']
p y[:b]&.try(:[], :c) # nil - the key is the String 'b', not the Symbol :b

## Convert
p :a.to_s
p 'a'.to_sym

https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/17-sym_str/main.rb

Remember:

  1. Ruby Strings don’t work like Java's String Pool. The same String content can have two Pointers.
  2. Ruby Symbols that share the same content will have the same Address in memory.
  3. Strings are mutable and Symbols aren't.
  4. It's often easier and better to use Symbols (performance, overhead, immutability-wise) to access, enumerate, or work with Hashes.

https://stackoverflow.com/questions/255078/whats-the-difference-between-a-string-and-a-symbol-in-ruby
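A quick demonstration of points 1-3:

```ruby
# Two String literals with the same content are distinct objects...
p 'hello'.object_id == 'hello'.object_id # => false
# ...while a Symbol with the same content is a single shared object
p :hello.object_id == :hello.object_id   # => true

# Strings are mutable; Symbols are not
s = 'mut'
s << 'able'
p s # => "mutable"
```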

Equality and Comparisons

For all intents and purposes one should generally use == ("Generic Equality"):

# At the Object level, == returns true only if obj and other are the same object.
1 == 1.0     #=> true
1.eql? 1.0   #=> false

As opposed to eql? ("Hash Equality"), which also compares by type/subtype (as in the Numeric comparison above) and is what Hash uses to compare keys, or === ("Case Equality") and equal? (strict identity). Generally, these should be avoided unless specifically needed, since their semantics vary by implementing Class and can require lengthy research into their side-effects.

If required use the other existing comparison operators or Methods:

b.is_a? A         
b.kind_of? A      
b.instance_of? A  

## check identity by object_id
x = 10
y = 25
z = x

puts x.object_id
puts y.object_id
puts z.object_id == x.object_id

Consult: https://stackoverflow.com/questions/7156955/whats-the-difference-between-equal-eql-and and https://ruby-doc.org/3.2.2/Object.html#method-i-eql-3F

Helpful Loop Operations

each_with_index:

a = %w[one two three]
a.each_with_index { |item, index|
  p %Q(#{item} at index: #{index})
}
  1. https://docs.google.com/presentation/d/1cqdp89_kolr4q1YAQaB-6i5GXip8MHyve8MvQ_1r6_s/edit#slide=id.g2048949e_0_38
  2. https://www.toptal.com/ruby/interview-questions
  3. https://www.turing.com/interview-questions/ruby
  4. https://stackoverflow.com/questions/255078/whats-the-difference-between-a-string-and-a-symbol-in-ruby
  5. https://stackoverflow.com/questions/7156955/whats-the-difference-between-equal-eql-and
  6. https://ruby-doc.org/3.2.2/Object.html#method-i-eql-3F

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024
  2. https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/17-sym_str/main.rb

Ruby: Safe Navigation Operators

Consider the example of checking an index of an M x N Array where some row r may be Out of Bounds.

Or, alternatively, checking whether a nested field exists on an Object.

Three ways of performing such checks are given as follows.

The Overly Verbose Way

if account && account.owner && account.owner.address
    # ...
    # => false
    # => nil
end

ActiveRecord Try

if account.try(:owner).try(:address)
    # ...
    # => false
    # => nil
end

Safe Navigation Operator

if  account&.owner&.address
    # ...
    # => nil
    # => undefined method `address' for false:FalseClass`
end

With Hashes

Given:

hash_example = Hash[]
hash_example['a'] = 1
hash_example[:b] = 100

To check for nested 'a' > :b (note that try(:[], :b) performs a key lookup, whereas try(:b) would attempt to invoke a method named b):

hash_example['a']&.try(:b)      # nil unless the value responds to a method named b
hash_example['a']&.try(:[], :b) # key lookup

Array#dig and Hash#dig

Consider:

address = params[:account].try(:[], :owner).try(:[], :address)
# or
address = params[:account].fetch(:owner) { {} }.fetch(:address)
address = params.dig(:account, :owner, :address)
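Note that dig is core Ruby (no ActiveSupport required) and short-circuits with nil on a missing key. Here params is a plain Hash standing in for Rails params:

```ruby
params = { account: { owner: { address: "123 Main St" } } }

p params.dig(:account, :owner, :address)   # => "123 Main St"
p params.dig(:account, :missing, :address) # => nil - no NoMethodError raised
```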
  1. https://mitrev.net/ruby/2015/11/13/the-operator-in-ruby/

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/16-safe_navigation/main.rb

Ruby: Truthy Evaluations

In Ruby, only false and nil are evaluated to false.

0, [], etc. all evaluate to true. (Unlike JavaScript, where 0 evaluates to false.)
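A quick demonstration:

```ruby
# Only false and nil are falsy in Ruby
p "0 is truthy"     if 0
p "[] is truthy"    if []
p "'' is truthy"    if ''
p "nil is falsy"    unless nil
p "false is falsy"  unless false
```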

  1. https://bulldogjob.com/readme/ruby-gotchas-that-will-come-back-to-haunt-you

Ruby: Visibility and Access

  1. @@ - Class Variable, defines a field that's shared between all instances of the Class. For example, counting the number of instantiated copies of a Class that have been created since Application start.
  2. @ - Instance Variable; such fields must be set and retrieved through getter and setter methods (or via self) rather than accessed directly from outside the Class.
  3. attr_accessor - generates those getters and setters so fields can be accessed directly with dot notation.
  4. private - a keyword indicating that every method defined below it is given the private access visibility modifier.

Example

class ExampleClass
  attr_accessor :field_one

  def set_and_get_field_two(arg)
    ## Does not have to be declared as attr_accessor
    ## But cannot be directly accessed in public
    @field_two = arg
    p @field_two
  end

  def get_field_two()
    p @field_two
  end

  def get_field_one
    ## These are the same
    p @field_one
    p self.field_one

    example_private(arg1: @field_one)
  end

  # Everything below is given the 'private' access modifier
  private

  def example_private(arg1:)
    p arg1
  end
end

## Call 
### Constructor
e = ExampleClass.new
### Set attr_accessor field
e.field_one = 2
### Getter for @
e.get_field_one

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/blob/main/ruby/12-access/main.rb

Ruby: Common Rails Commands

Start

rails db:create
rails db:migrate
rake db:seed
rails server --binding=127.0.0.1

By default, the Ruby on Rails server will serve from: http://localhost:3000/

Migrations

bin/rails generate migration ExampleMigration
rails db:migrate
# rake db:migrate

Reset DB

# run migration and seeding
rails db:setup 
# rails db:create
# rails db:migrate
# rake db:seed

rails db:reset

Create Model and Table

rails g model Dinosaur name:text
rails g model BabyDino name:text

Create Controller

rails g controller Dinosaurs

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/tree/main/ruby

Ruby: Active Record

Active Record FK Example

Schema

ActiveRecord::Schema[7.1].define(version: 2024_07_05_224153) do
    # These are extensions that must be enabled in order to support this database
    enable_extension "plpgsql"

    create_table "examples", force: :cascade do |t|
      t.text "name"
      t.datetime "created_at", null: false
      t.datetime "updated_at", null: false
    end

    create_table "jsonexample", id: false, force: :cascade do |t|
      t.integer "id"
      t.json "json_col"
      t.json "json_array_col"
      t.jsonb "jsonb_col"
      t.jsonb "jsonb_array_col"
    end

    create_table "sub_examples", force: :cascade do |t|
      t.bigint "example_id"
      t.text "name"
      t.datetime "created_at", null: false
      t.datetime "updated_at", null: false
      t.index ["example_id"], name: "index_sub_examples_on_example_id"
    end

  end

From: https://github.com/Thoughtscript/rrr_2024/blob/main/rails/web/db/schema.rb

Models

class SubExample < ApplicationRecord
    attribute :name, :string

    # Does not need to be explicitly set
    # attribute :id, :integer
    # self.primary_key = :id

    # Does not need to be explicitly set
    # attribute :example_id, :integer
    belongs_to :example, class_name: "Example", inverse_of: :sub_examples

    validates :name, presence: true
end

From: https://github.com/Thoughtscript/rrr_2022/blob/main/_ruby/web/app/models/baby_dino.rb

class Example < ApplicationRecord
    # Remember that Rails ActiveRecord uses attributes here!
    # Distinct from DTO's.  
    attribute :name, :string

    # Does not need to be explicitly set
    # #attribute :id, :integer
    # self.primary_key = :id

    has_many :sub_examples, inverse_of: :example

    validates :name, presence: true

    def msg
      "Test Message!"
    end
end

From: https://github.com/Thoughtscript/rrr_2024/blob/main/rails/web/app/models/example.rb

Java: General Concepts

  1. Java Virtual Machine - run Java on any machine or underlying environment.
    • Abstracts the Java runtime environment so Java can be executed uniformly anywhere.
  2. Java Heap, Stack, and Garbage Collecting
    • Heap - shared memory allocated for use by the JVM and applications running within it.
    • Stack - per-Thread memory holding method call frames and local variables.
    • Garbage Collecting - the automatic or manually configured/triggered periodic removal of unreachable Objects from the Heap.
  3. Write Time, Compile Time, and Runtime
    • Checked Exceptions and Static methods are handled/checked at Write Time, Compile Time
    • Unchecked Exceptions and Non-Static methods are handled/created/checked at Runtime
  4. .java source files are compiled into Byte Code stored in .class files.
  5. Impure Object Oriented Design - Primitive Data Types don't inherit from the top-level Object Class.
  6. Reference Types - wrappers for the Primitive Data Types that inherit from the Object Class and can therefore take null values, expose .toString(), etc.
  7. Statically Typed - Types are strictly enforced in Parameters, Arguments, Method Signatures, return Types. Limited Auto-boxing.
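Points 5-7 can be illustrated with Auto-boxing and the Integer wrapper (a minimal sketch):

```java
public class BoxingDemo {
    public static void main(String[] args) {
        Integer boxed = 5;         // auto-boxing: primitive int -> Integer Reference Type
        int unboxed = boxed;       // auto-unboxing back to the primitive

        Integer nullable = null;   // Reference Types can hold null; primitives cannot

        System.out.println(boxed + unboxed);  // 10
        System.out.println(nullable == null); // true
        System.out.println(boxed.toString()); // "5" - Object methods available on the wrapper
    }
}
```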

Top Java 8 Features

  1. Lambdas and Functional Interfaces
  2. CompletableFuture
  3. Stream API

Top Features Since Java 8

  1. sealed keyword
  2. Records
  3. Virtual Threads
  4. Better Heap and JVM memory management.

Java: Comparisons

  1. Primitive Comparison (==) - used to compare the equivalence of two Primitive Data Types.
  2. Object Comparison (equals()) - used to compare the equivalence of two Objects.
String a = "abc";
String b = "abc";

System.out.println(a == b);      // true - both point to the same interned reference
System.out.println(a.equals(b)); // true - both have the same value
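The distinction becomes visible when one String is created with new, bypassing the String Pool:

```java
public class StringComparison {
    public static void main(String[] args) {
        String a = "abc";
        String b = "abc";             // interned: same pooled reference as a
        String c = new String("abc"); // explicit new: a distinct reference

        System.out.println(a == b);      // true  - same reference
        System.out.println(a == c);      // false - different references
        System.out.println(a.equals(c)); // true  - same value
    }
}
```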

Type Checking

Applies to descendants of the Object Class:

Person p = new Person();
System.out.println(p instanceof Person); // true

Comparators

Recall that comparisons typically return:

  1. A negative value (conventionally -1) - some A should precede some B
  2. 0 - A and B are considered equal for ordering purposes
  3. A positive value (conventionally 1) - some B should precede some A

In the example below, a Comparator is implemented as a Lambda to sort Products per the following:

  1. Sort by Availability (boolean) descending
  2. If tied, sort by DiscountedPrice (Double) ascending
  3. If still tied, sort by Id (Long) ascending
List<Product> products = new ArrayList<Product>();

products.sort((a, b) -> {
    boolean A = a.getAvailability();
    boolean B = b.getAvailability();
    if (A && !B) return -1;
    if (B && !A) return 1;

    Double AA = a.getDiscountedPrice();
    Double BB = b.getDiscountedPrice();
    if (AA < BB) return -1;
    else if (BB < AA) return 1;

    Long AAA = a.getId();
    Long BBB = b.getId();
    if (AAA < BBB) return -1;
    else if (BBB < AAA) return 1;
    return 0;
});
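The same three-level ordering can also be expressed with Comparator's chaining helpers. Product is re-declared here as a minimal record stand-in (hypothetical fields matching the example above) so the sketch runs standalone:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparatorChain {
    // Minimal stand-in for the Product class assumed in the example above
    record Product(Long id, Double discountedPrice, boolean available) {}

    static void sortProducts(List<Product> products) {
        products.sort(Comparator
            .comparing((Product p) -> !p.available()) // available (true) first, since false < true
            .thenComparing(Product::discountedPrice)  // then lowest discounted price
            .thenComparing(Product::id));             // then lowest id
    }

    public static void main(String[] args) {
        List<Product> products = new ArrayList<>(List.of(
            new Product(2L, 5.0, true),
            new Product(1L, 5.0, true),
            new Product(3L, 1.0, false)));

        sortProducts(products);
        products.forEach(p -> System.out.println(p.id())); // 1, 2, 3
    }
}
```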

Java: References

  1. Shallow Copying in Java: assigning a non-Primitive value without the new keyword copies the reference, not the underlying Object.
  2. Java is strictly Pass by Value - even Object references are themselves passed by value - but there are some quirks that can make it feel like Pass by Reference.

Pass by Reference(-ish)

Given:

class ReferenceTest {
    int num;

    ReferenceTest(int x) {
        num = x;
    }

    ReferenceTest() {
        num = 0;
    }
}

public class ReferenceExamples {

    // Remember that Java scope can run counter-intuitively against the expected value.
    public static void examples() {
        ReferenceTest rt = new ReferenceTest(20);
        updateNewReference(rt);
        System.out.println("I'm the outer scope: " + rt.num); // 20 - returns the supplied value above, not 50
        update(rt);
    }

    public static void updateNewReference(ReferenceTest rt) {
        // Restricted by scope, the new value is set within update()
        rt = new ReferenceTest();
        rt.num = 50;
        System.out.println("I'm the inner scope - new reference: " + rt.num);
    }

    public static void update(ReferenceTest rt) {
        rt.num = 50;
        System.out.println("I'm the inner scope - same reference: " + rt.num);
    }
}

Calling examples() will produce the following:

I'm the inner scope - new reference: 50
I'm the outer scope: 20
I'm the inner scope - same reference: 50

Remember:

  1. Don't create a new reference in memory (via the new keyword) if you just want to modify the value of something you've passed.
  2. Return the exact item you want to return if you intend on reusing a variable.
  1. https://www.interviewbit.com/java-interview-questions/#does-java-pass-by-value-or-pass-by-reference

Java: Beans

Beans are Encapsulated, reusable resources that are created and typically initialized as Singletons in an Application Context.

Some specialized Beans (like the Java EE Message Driven Bean) are not Singletons and respond to triggering messages instead:

  1. They don't maintain a Synchronized or common state throughout the Application.
  2. Today, Message Driven Beans have been widely replaced by other Publish-Subscribe, Event Listening, or Streaming alternatives.

They can then be used anywhere within the Application provided they are correctly configured to do so.

Lifecycle

Within Spring, (customized) Beans are configured using the @Bean annotation within a @Configuration Class. In Java EE, Beans were traditionally defined in XML files and are now commonly defined using annotations - much as Servlets migrated from near-universal XML configuration to programmatic, annotation-based definition.

They are then Injected into a Service (@Service) or Component (@Component), typically using the @Autowired annotation. Spring Components are Beans themselves (and Services are a kind of Component). Spring knows to look for Components automatically from the @SpringBootApplication or @EnableWebMvc annotation.

  1. Beans are then Initialized at Run Time.
  2. Services that use them as a dependency can modify their state, make calls using their methods, etc.
  3. They are destroyed when the Application Context is shut down or destroyed.

Java: Object Oriented Design

Java uses a top-level Object Class that all non-primitive Types inherit from.

Object Oriented Concepts

Java uses:

  1. Classes (templates, types, or kinds of things that Objects are or that Objects instantiate)
  2. Objects (specific instances, copies, or particular instantiations of Classes)
public class Example {
    private String id;

    // non-parameterized constructor
    public Example() {}

    // parameterized constructor
    public Example(String id) {
        setId(id);
    }

    // getters and setters
    public String getId() {
        return this.id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
Example x = new Example();
Example x = new Example("fsfsf");

Encapsulation

Encapsulation refers to enclosing all the functionalities of an object within that object so that the object’s internal workings (its methods and properties) are hidden from the rest of the application.

Getters and Setters, Access Modifiers, and Packages are some of the primary ways Encapsulation is achieved in Java.

Encapsulation: boundaries, visibility, and access are restricted.

Aggregation

Aggregation refers to one Class holding another Class as a field (a "has-a" relationship), or to Classes belonging to other Classes as nested Classes.

Classes can be defined and combined within Classes, a kind of nesting.

A (non-static) Inner Class can only be instantiated through an existing instance of the Outer Class.

public class A {
    //...
}

public class B {
    A a;
}

A nested Class may need to be Static in order to effectively outlive the wrapping Outer Class. Failing to do so can introduce memory leaks.

public interface WidgetParser {
    Widget parse(String str);
}

public class WidgetParserFactory {
    public WidgetParserFactory(ParseConfig config) {
        //...
    }

    public WidgetParser create() {
        return new WidgetParserImpl(//...);
    }

    private class WidgetParserImpl implements WidgetParser {
        //...

        @Override 
        public Widget parse(String str) {
            //...
        }
    }
}

Since WidgetParserImpl isn't Static, every WidgetParserImpl instance holds an implicit reference to the WidgetParserFactory that created it - so even if the factory is discarded after the WidgetParser is created, it cannot be garbage collected, and memory leaks can ensue. Declaring the nested Class with the static keyword removes that implicit reference to the Outer instance (and hence avoids the issue of lingering nested Objects).

From: https://www.toptal.com/java/interview-questions and https://www.javatpoint.com/static-nested-class

Note that Composition is a specific, stronger, variant of Aggregation where both the Outer and Inner Classes are tightly coupled: they are created and destroyed together.

From: https://www.geeksforgeeks.org/association-composition-aggregation-java/

Inner and Outer Class Initialization

public class A {
    public class B {
        public String runMe() {
            //...
        }
    }
}
A outerObject = new A();
A.B b = outerObject.new B();
System.out.println(b.runMe());

From: https://docs.oracle.com/javase/tutorial/java/javaOO/nested.html

Refer to: https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/Main.java

Inheritance

Inheritance:

  1. Classes can be subclassed (e.g. - kinds, kinds of a kind).
  2. The attributes available on parent Classes or prototypes are available on their children.
  3. Inheritance occurs via the extends keyword. Java does not support multiple inheritance in Classes (but it does for Interfaces).

Remember that basic initialization obeys the following general schema:

[Superclass] x = new [Type Inheritor of Superclass]();

super:

  1. Use the super keyword to access Superclass properties.
  2. Call super() to invoke the Superclass constructor (which can be parameterized).
  3. Or, use super to invoke a parent Class method like so Collection.super.stream().skip(1);
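A minimal sketch of the first two points (Animal and Dog are hypothetical classes):

```java
public class SuperDemo {
    static class Animal {
        protected String name;

        Animal(String name) { this.name = name; } // parameterized constructor

        String speak() { return "Hello from " + name; }
    }

    static class Dog extends Animal {
        Dog(String name) {
            super(name); // invoke the Superclass constructor
        }

        @Override
        String speak() {
            return super.speak() + " the dog"; // access the Superclass method
        }
    }

    public static void main(String[] args) {
        Animal a = new Dog("Rex"); // [Superclass] x = new [Type Inheritor of Superclass]();
        System.out.println(a.speak()); // Hello from Rex the dog
    }
}
```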

Polymorphism

Polymorphism: an Object may belong to multiple Classes or exhibit multiple kinds.

Since Java does not directly support Multiple Inheritance in Classes, Polymorphism with respect to Class Inheritance is accomplished by extending a Class and implementing one or more Interfaces, by implementing two or more Interfaces, or by extending a Class that itself extends another Class (an inheritance chain).

Thus, a Class that implements multiple Interfaces is said to exhibit Polymorphism.

public interface Mammal extends Animal, Biologic {
    //...
}

public class Adam extends Human implements Mammal, Developer {
    //...
}

Records

Records - specified by the record keyword - syntactic sugar to specify an immutable POJO / DTO.

Example:

record Car(int seats, String color) {}
  1. Automatically generates Getters and initializes field values.
  2. Automatically generates the AllArgs Constructor.
  3. Can define instance methods and custom Constructors.
  4. Records cannot use an explicit extends keyword:
    • All Record Classes are final, so they can't be extended.
    • All Record Classes implicitly extend the java.lang.Record Class.
  5. All the fields specified in the record declaration are final.
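A short demonstration of the generated members for the Car record above (note the accessor names carry no get- prefix):

```java
public class RecordDemo {
    record Car(int seats, String color) {}

    public static void main(String[] args) {
        Car car = new Car(4, "red");     // generated all-args constructor

        System.out.println(car.seats()); // 4 - generated accessor
        System.out.println(car.color()); // red
        System.out.println(car);         // Car[seats=4, color=red] - generated toString
    }
}
```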
  1. https://www.toptal.com/java/interview-questions
  2. https://www.javatpoint.com/static-nested-class
  3. https://www.geeksforgeeks.org/association-composition-aggregation-java/
  4. https://docs.oracle.com/javase/tutorial/java/javaOO/nested.html
  5. https://www.digitalocean.com/community/tutorials/java-records-class

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/tree/master/src/io/thoughtscript/refresh/ood
  2. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/Main.java
  3. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/records/RunRecordExample.java

Java: Enums

Basic Example

Given:

public class Pet {

    public enum PetType {DRAGON, DOG, CAT, BULL, JARBUL}

    private PetType type;
    private long pet_id;

    public PetType getType() {
        return type;
    }

    public void setType(PetType type) {
        this.type = type;
    }

    public long getPet_id() {
        return pet_id;
    }

    public void setPet_id(long pet_id) {
        this.pet_id = pet_id;
    }

    //...
}
Pet pet = new Pet();
pet.setType(Pet.PetType.DRAGON);

Advanced Topics

Remember that you can define:

  1. Fields
  2. Methods
  3. Constructors

within Enums.

As such one can also implement Singletons.
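A sketch showing a field, constructor, and method within an Enum, plus the Enum-based Singleton idiom (Planet and Config are hypothetical names):

```java
public class EnumDemo {
    enum Planet {
        MERCURY(3.30e23), EARTH(5.97e24);

        private final double mass; // field

        Planet(double mass) {      // constructor (implicitly private)
            this.mass = mass;
        }

        double mass() {            // method
            return mass;
        }
    }

    // Enum-based Singleton: the JVM guarantees exactly one INSTANCE
    enum Config {
        INSTANCE;

        String describe() { return "single shared instance"; }
    }

    public static void main(String[] args) {
        System.out.println(Planet.EARTH.mass() > Planet.MERCURY.mass()); // true
        System.out.println(Config.INSTANCE.describe());
    }
}
```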

  1. https://www.baeldung.com/a-guide-to-java-enums

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/sealed/SealedExample.java

Java: Dynamic Proxying

Java Proxy Classes (Java Proxies):

  1. Are encouraged in Clean Code.
  2. Can be thought of as a Facade (a Wrapper with additional functionality to simplify use).
  3. Often target some range of Interface implementations.
  4. Used to invoke or execute some additional operation or imbue an operation with extra functionalities. (Additional logging, intercepting a method (via InvocationHandler), etc.)
  1. https://www.baeldung.com/java-dynamic-proxies
  2. https://www.logicbig.com/tutorials/core-java-tutorial/java-dynamic-proxies/method-interceptors.html
  3. https://opencredo.com/blogs/dynamic-proxies-java-part-2/
  4. https://codegym.cc/groups/posts/208-dynamic-proxies
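A minimal java.lang.reflect.Proxy sketch adding logging around an Interface method (Greeter is a hypothetical interface):

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Greeter {
        String greet(String name);
    }

    static Greeter loggingProxy(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[]{Greeter.class},
            // InvocationHandler as a lambda: extra behavior around the real call
            (proxy, method, methodArgs) -> {
                System.out.println("calling " + method.getName());
                return method.invoke(target, methodArgs);
            });
    }

    public static void main(String[] args) {
        Greeter real = name -> "Hello, " + name;
        Greeter proxied = loggingProxy(real);
        System.out.println(proxied.greet("Adam")); // prints "calling greet" then "Hello, Adam"
    }
}
```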

Java: Abstraction

  1. Interfaces and Abstract Classes allow for the general structure of Implementing Classes to be planned out and further specified in those implementations.
  2. Interfaces are lighter-weight and impose fewer suppositions on implementing Classes than Subclasses of Abstract Classes.
  3. Abstract Classes typically contain fully-defined Methods that are then Overridden, whereas Interfaces traditionally contain only Method signatures (though since Java 8, Interfaces may also declare default and static Methods with bodies).
  4. Both Interfaces and Abstract Classes can have fields defined within them that are borne by their Implementing or Subclassing Classes.

Interface

  1. Interfaces support Multiple Inheritance (whereas Classes don't).
  2. Methods defined in Interfaces are Public and Abstract by default. Typically, only their signature is defined.
  3. Are Implemented or Extended with the implements or extends keywords, respectively.
  4. Fields defined in an Interface are public static final by default.
public interface ExampleA {
    public void methodA();
    void methodB();
}
public interface ExampleB extends ExampleA {
    void methodC();
}
public class ExampleBImpl implements ExampleB {
    public void methodA() {
        System.out.println("I'm methodA");
    }

    public void methodB() {
        System.out.println("I'm methodB");
    }

    public void methodC() {
        System.out.println("I'm methodC");
    }
}

Functional Interfaces

Lambdas can also be used with Functional Interfaces (which provide the most amount of customization when using lambda expressions):

public class HelloWorld{

    @FunctionalInterface
    interface CheckStringInterface {
        boolean checkStringLambda(String s);
    }

    private static boolean checkString(String s) { 
        return ((s != null) && (!s.isEmpty()) && (Character.isUpperCase(s.charAt(0))));    
    }

    private static CheckStringInterface checkStringInstance = 
        (String s) -> ((s != null) && (!s.isEmpty()) && (Character.isUpperCase(s.charAt(0))));

    public static void main(String []args){
        System.out.println(checkStringInstance.checkStringLambda("Test"));
        System.out.println(checkStringInstance.checkStringLambda("Alpha"));
        System.out.println(checkString("test"));
    }
}

We observe how the Lambda expression allows us to implement the checkStringLambda() method as we see fit. The same parameters must be retained in a specific implementation but we can go wild with whatever we want right of the arrow:

public class HelloWorld{

    @FunctionalInterface
    interface CheckStringInterface {
        boolean checkStringLambda(String s);
    }

    private static CheckStringInterface checkStringInstanceOne = 
        (String s) -> ((s != null) && (!s.isEmpty()) && (Character.isUpperCase(s.charAt(0))));

    private static CheckStringInterface checkStringInstanceTwo = 
        (String s) -> (s != null);

    public static void main(String []args){
        System.out.println(checkStringInstanceOne.checkStringLambda("Test"));
        System.out.println(checkStringInstanceTwo.checkStringLambda("Alpha"));
    }
}

@FunctionalInterface must only be used on an Interface with exactly one abstract Method (default and static Methods don't count against this). A Functional Interface is an Interface with a single method to implement and allows Java to have a degree of Functional Programming in what is otherwise resolutely Object Oriented.
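Note too that java.util.function already ships standard Functional Interfaces (Predicate, Function, Supplier, etc.), often removing the need to declare a custom one:

```java
import java.util.function.Predicate;

public class PredicateDemo {
    // Predicate<String> is a built-in single-abstract-method interface:
    // the lambda implements its test(String) method
    static final Predicate<String> STARTS_UPPER =
        s -> s != null && !s.isEmpty() && Character.isUpperCase(s.charAt(0));

    public static void main(String[] args) {
        System.out.println(STARTS_UPPER.test("Test")); // true
        System.out.println(STARTS_UPPER.test("test")); // false
    }
}
```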

Abstract Classes

  1. Abstract Classes are not themselves instantiated (although they can contain a Constructor) - they are Subclassed and instantiated thereby.
  2. Methods belonging to an Abstract Class can be Abstract and they can be Overridden.
  3. Abstract Classes are Subclassed using the extends keyword and their methods are Overridden using the @Override annotation.
public abstract class ExampleA {

    public void methodA() {
        System.out.println("I'm methodA from within ExampleA");
    }

    abstract void methodB();
}
public class ExampleB extends ExampleA {

    @Override
    public void methodA() {
        System.out.println("I'm methodA from within ExampleB");
    }

    public void methodB() {
        System.out.println("I'm methodB");
    }
}

Java: Singletons

A Singleton is a Class that's instantiated once and reused everywhere as that, single, copy.

Not Thread Safe

   // Sloppy, lazily-initialized, and not thread safe (adapted from a HackerRank example):
   // two Threads may both pass the null check and create separate instances
   class Singleton {
      private static Singleton instance;

      private Singleton() {}

      public static Singleton getSingleInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
      }
   }

   // (By contrast, eager initialization - private static final Singleton instance = new Singleton(); -
   // is thread safe, since the JVM guarantees a Class is initialized exactly once.)

Thread Safe Implementations

   import java.util.concurrent.Semaphore;

   // Refer to: https://www.digitalocean.com/community/tutorials/thread-safety-in-java-singleton-classes
   public class ThreadSafeSingleton {

      // Requires the CPU not to reorder when the variable is read using volatile.
      // Avoids a scenario where prior to initialization the variable might
      // be null in a secondary Thread.
      private static volatile ThreadSafeSingleton instance;

      // Private constructor for Singleton
      private ThreadSafeSingleton() {}

      public static ThreadSafeSingleton getOrCreateInstance() {
        // Separate out the volatile variable into another copy so no
        // copies are read by different threads here.
        ThreadSafeSingleton result = instance;

        if (result == null) {
              // synchronized
              synchronized (ThreadSafeSingleton.class) {
                result = instance;
                if (result == null) {
                    instance = result = new ThreadSafeSingleton();
                }
            }
        }
        return result;
      }

      // Mutex - initialize beforehand in a static context.
      // Since a Singleton is a single shared instance, don't create the Mutex
      // per-instance as a field (e.g. - private final Semaphore mutex = new Semaphore(1);) -
      // hold or pass in a single static Semaphore instead.
      public static ThreadSafeSingleton getOrCreateInstanceWithSemaphore(Semaphore mutex) throws InterruptedException {
        // Copy the volatile field into a local variable so it's only
        // read once here.
        ThreadSafeSingleton result = instance;

        if (result == null) {
            // acquire() blocks until a permit is free - checking
            // availablePermits() first would be a race condition (and could
            // return null when no permit happened to be available).
            mutex.acquire();
            try {
                result = instance;
                if (result == null) {
                    instance = result = new ThreadSafeSingleton();
                }
            } finally {
                mutex.release();
            }
        }
        return result;
      }
   }

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/tree/master/src/io/thoughtscript/refresh/patterns

Java: Visibility and Access Modifiers

Access Modifiers

  1. public - Everywhere within the Application, all Subclasses.
  2. protected - Same Package, Subclasses regardless of Package.
  3. package/none/default - Same Package, Subclasses in same Package.
  4. private - Class and Object only, no Subclasses.
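These rules can be sketched with a minimal (hypothetical) Counter class - the private field is only reachable through the Class's own public methods:

```java
// Hypothetical Counter: the private field is invisible outside Counter,
// even to Subclasses - only the public accessors are visible.
class Counter {
    private int count = 0; // private - Class and Object only

    public void increment() { count++; }

    public int getCount() { return count; }
}

Counter c = new Counter();
c.increment();
c.increment();
// c.count; // would not compile from another Class - count is private
System.out.println(c.getCount()); // 2
```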

Static vs Nonstatic

  1. static - belongs to the Class within which it resides.

  2. non-static - belongs to the Object/instance and is checked at runtime.

    A Non-static method belongs to the specific Object/instantiation of a particular Class. Non-static methods thus require an Object created via the new keyword (which invokes a constructor) and are invoked directly on that created Object.
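A minimal sketch of the distinction (hypothetical Greeter class):

```java
// Hypothetical Greeter: static belongs to the Class, non-static to the instance.
class Greeter {
    private String name;

    Greeter(String name) { this.name = name; }

    // static - invoked on the Class itself, no instance required
    static String classGreeting() { return "Hello from the Class"; }

    // non-static - requires an Object created with new
    String greet() { return "Hello from " + name; }
}

System.out.println(Greeter.classGreeting()); // Hello from the Class
System.out.println(new Greeter("A").greet()); // Hello from A
```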

Final

  1. The final keyword specifies that a variable is immutable, a constant.
  2. The final keyword specifies that a method cannot be Overridden.
  3. The final keyword specifies that a class cannot be Extended / Subclassed.
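All three uses in one (hypothetical) sketch:

```java
// Hypothetical Config: final on the Class, a field, and a method.
final class Config { // 3. can't be Extended / Subclassed
    final int maxRetries = 3; // 1. can't be reassigned - a constant

    // 2. can't be Overridden (moot here, since the Class is already final)
    final int getMaxRetries() {
        return maxRetries;
    }
}

// class ExtendedConfig extends Config {} // would not compile - Config is final

Config config = new Config();
// config.maxRetries = 5; // would not compile - maxRetries is final
System.out.println(config.getMaxRetries()); // 3
```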

Sealed

Specifies that only certain Classes can inherit from or implement from the sealed Class or Interface.

  1. An optional and potentially intermediate access visibility modifier-like keyword.

  2. Specifies the exact Subclasses that can Subclass. (Even the Default/Package can be unduly permissive – consider Classes that have encryption hashes/ciphers/salts.)

  3. Or, Classes that can implement an Interface.

  4. sealed Class constraints:

    • Permitted Subclasses must belong to the same module as the sealed Class.
    • Every permitted Subclass must explicitly extend the sealed Class.
    • Every permitted Subclass must define a modifier: final, sealed, or non-sealed.
  5. Generally, a sealed-type hierarchy can have a Class or an Interface as its root.

    • The remainder of the hierarchy can contain Classes or Interfaces, provided all leaf nodes of the hierarchy are either final concrete Classes or are non-sealed.
    • If a leaf element is non-sealed, it can be either a Class or an Interface.
  6. Example hierarchy from https://blogs.oracle.com/javamagazine/post/java-sealed-types-subtypes-final:

    sealed interface Top permits A, B, C {}
    non-sealed class A implements Top {}
    record B(Top s) implements Top {}
    enum C implements Top { SUB_A{}, SUB_B{} }

Style Guides

  1. https://blogs.oracle.com/javamagazine/post/java-comments-reduce-technical-debt
  2. https://google.github.io/styleguide/javaguide.html#s3-source-file-structure
  3. https://blogs.oracle.com/javamagazine/post/java-sealed-types-subtypes-final
  4. https://blogs.oracle.com/javamagazine/post/java-quiz-sealed-type-records
  5. https://www.baeldung.com/java-sealed-classes-interfaces

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/tree/master/src/io/thoughtscript/refresh/visibility
  2. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/sealed/SealedExample.java

Java: Checked and Unchecked Exceptions

Some Java Exceptions are Checked - they require a throws keyword or a try-catch block at Write Time (Compile Time).

Others aren't. They are Unchecked - handled and Thrown at Run Time.

Examples:

  1. ArrayIndexOutOfBoundsException is Unchecked.
  2. NullPointerException is Unchecked.
  3. Parsing format exceptions are typically Checked. e.g. - ParseException
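A short sketch of the difference - the Checked ParseException demands handling at Compile Time, while Unchecked exceptions compile without any:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

// ParseException is Checked - the compiler requires the try-catch below
// (or a throws clause). ArrayIndexOutOfBoundsException is Unchecked -
// no handling is demanded at Compile Time; it simply throws at Run Time.
boolean parsed;
try {
    new SimpleDateFormat("yyyy-MM-dd").parse("2023-01-01");
    parsed = true;
} catch (ParseException e) { // deleting this catch breaks compilation
    parsed = false;
}
System.out.println(parsed); // true

int[] arr = new int[1];
// arr[5] = 0; // compiles fine - throws ArrayIndexOutOfBoundsException at Run Time
```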

Handling

Interesting scenario:

class BatteryException extends Exception { }
class FuelException extends Exception { }
public class Car {
  public String checkBattery() throws BatteryException {
    // implementation
  }
  public String checkFuel() throws FuelException {
    // implementation
  }
  public String start() {
    try {
      checkBattery();
      checkFuel();
    } catch (BatteryException be) {
      return "BadBattery";
    } finally {
      return "";
    }
  }
}
  1. One might be tempted to think the compiler refuses to compile since FuelException isn't caught. However, the presence of the finally clause overrules that typical requirement.
  2. One might also be tempted to think that "BadBattery" is returned or FuelException is thrown from start(). Again, the presence of the finally clause overrules that incorrect but informed intuition. (Regardless of the implementation of checkBattery() and checkFuel(), the end result will always be the same.)

In the above scenario: start() will always return a "".

From: https://blogs.oracle.com/javamagazine/post/java-quiz-try-catch-finally-exception

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/finallyblock/FinallyExample.java

Java: Errors

Fatal Errors at Runtime will terminate an Application, killing the Process.

Exceptions, by contrast, are undesired but anticipated defects in some code that are Thrown, Caught, and Handled at Run Time (with Checked Exceptions verified at Compile Time).

Finally

A finally block may not be reached or executed when a fatal system Error occurs or if the Process is killed early (deliberately):

// JavaScript
process.exit()
// Java
System.exit(0);

Java: Arrays

Use length to access the size of the Array.

Java Arrays are fixed in their size after initialization.

Initialization

// With dimensions/size using this syntax.
// The default way
int[] intArrA = new int[8];
int[] intArrB;

Array initialization with values:

int[] intArrA = {1,2,3,4000,707,707,3,3};

Reassignment

Note that an Array variable A declared with size a can be freely reassigned to reference an Array of a different size b - the variable simply points to the new Array.

int[] A = {1,2,3,4,5,6,7,8};
int[] B = {5,5,5,5,5};
A = B;

for (int i = 0; i < A.length; i++) {
    System.out.println(A[i]);
}

/*
5
5
5
5
5
*/

Similarly, declaring an Array without a size and assigning an existing Array afterward:

int[] C;
C = A;

for (int i = 0; i < C.length; i++) {
    System.out.println(C[i]);
}

/*
5
5
5
5
5
*/

is valid - C simply refers to the same underlying Array as A (no copy is made).

Reference Types

Remember that primitive Arrays aren't Autoboxed to their Reference Type counterparts (the type of an Array is the type of each element). Each individual element can be Autoboxed in the right circumstances, however.

Consequently, int[] won't be Autoboxed to Integer[] nor Object[] and so doesn't meet the constraints of either an Object[] o parameter nor the generic <T>, T[].

public static <T> void printArray(T[] o) {
    for (int i = 0; i < o.length; i++) {
        System.out.println(o[i]);
    }
}

public static void main(String[] args) {
    // Remember that int[] isn't autoboxed to Integer[] 
    // only each int[i]

    // Also Object[] requires that int[] would be autoboxed to Integer[] first
    // So, either set that explicitly or convert
    Integer[] intArr = {1, 2, 3};
    String[] strArr = {"Hello", "World"};
    printArray(intArr);
    printArray(strArr);
}

To and From ArrayLists, Lists

To ArrayList:

Integer[] array = {1, 2, 3};

List<Integer> listA = List.of(array);

List<Integer> listB = Arrays.asList(array);

ArrayList<Integer> listC = new ArrayList<Integer>(Arrays.asList(array));

And back again:

// No-argument toArray() returns an Object[] of length equal to the Collection size
Object[] arrA = listA.toArray();

// Faster zero-size Array
// Better for Casting and Generics
// Use this even over .toArray(new Integer[listB.size()])
Integer[] arrB = listB.toArray(new Integer[0]);

Refer to: https://www.baeldung.com/java-collection-toarray-methods

Concatenation

Solutions that work for both Object and Primitive Type Arrays:

int[] arrA = {1, 2, 3};
int[] arrB = {4, 5, 6};
int[] result = new int[arrA.length+arrB.length];

for(int i = 0; i < arrA.length; i++) {
    result[i] = arrA[i];
}

for(int i = 0;i < arrB.length; i++) {
    result[arrA.length + i] = arrB[i];
}

java.util.Arrays - copyOf():

import java.util.Arrays;

//....
int[] arrA = {1, 2, 3};
int[] arrB = {4, 5, 6};
int[] result = Arrays.copyOf(arrA, arrA.length + arrB.length);

System.arraycopy(arrB, 0, result, arrA.length, arrB.length);

org.apache.commons.lang3.ArrayUtils - addAll():

import org.apache.commons.lang3.ArrayUtils;

int[] arrA = {1, 2, 3};
int[] arrB = {4, 5, 6};

int[] result = ArrayUtils.addAll(arrA, arrB);

Sorting

Refer to the article on Comparisons.

int[] arrA = {5, 6, 3, 9, 4, 5, 6, 1, 2, 3};

// Natural sort
Arrays.sort(arrA);

parallelSort() is performant at element sizes greater than 1,000,000.

Otherwise, it's likely less performant than a standard sort().

Arrays.parallelSort(arrA);

  1. https://www.tutorialkart.com/java/java-array/java-concatenate-arrays/
  2. https://dev.to/khmarbaise/stream-concat-vs-new-arraylist-performance-within-a-stream-4ial
  3. https://www.geeksforgeeks.org/serial-sort-vs-parallel-sort-java/
  4. https://www.baeldung.com/java-arrays-sort-vs-parallelsort
  5. https://www.baeldung.com/java-collection-toarray-methods

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/tree/master/src/io/thoughtscript/refresh/arrays
  2. https://github.com/Thoughtscript/java_algos/blob/main/src/main/java/io/thoughtscript/conversions/Conversions.java

Java: Collections

  1. Remember that Collections require Reference Types not Primitive Types: e.g. Set<Integer> vs. Set<int>.

List

List<String> list = new ArrayList<String>();
    list.add("A");
    list.add("B");
    list.add("C");
  1. indexOf() is a List method not available on Arrays (without conversion).
  2. Use LinkedList for more expedient modifications.
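A quick sketch of why LinkedList suits frequent modifications at the ends (it also implements Deque):

```java
import java.util.LinkedList;

// LinkedList offers O(1) insertion/removal at both ends,
// where an ArrayList would shift every element on a head insert.
LinkedList<String> ll = new LinkedList<>();
ll.addLast("B");
ll.addFirst("A");
ll.addLast("C");
System.out.println(ll); // [A, B, C]
System.out.println(ll.removeFirst()); // A
System.out.println(ll); // [B, C]
```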
// Conversion between int array to List

// Reference type from primitive - either stream or use reference type
Integer[] intArrA = {1,2,3,4000,707,707,3,3};

// Must use reference type 
List<Integer> intList = Arrays.asList(intArrA);

// Or to ArrayList
ArrayList<Integer> listB = new ArrayList<Integer>(Arrays.asList(intArrA));

UnmodifiableList

List<String> listC = new ArrayList<>(Arrays.asList("one", "two", "three"));
List<String> unmodifiableList = Collections.unmodifiableList(listC);
// unmodifiableList.add("four"); // unmodifiableList is immutable!

// (if listC is modified so will the view unmodifiableList - e.g:
// listC.add("four");)

For more:

  1. https://www.baeldung.com/java-immutable-list
  2. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/lists/ListExamples.java
  3. https://blogs.oracle.com/javamagazine/post/java-lists-view-unmodifiable-immutable

Set

Set<Integer> setC = new HashSet<>();
    setC.add(0);
    setC.add(1);
    setC.add(2);
    setC.add(3);
    setC.add(4);

boolean x = setC.contains(0);

Map

// ...<Key, Value>
Map<String, String> hmA = new HashMap<String, String>();
    hmA.put("0", "A");
    hmA.get("0"); // null or Value
    hmA.containsKey("0"); // true if Key is present, false otherwise

HashMap

  1. Java Collections HashMap is a basic implementation of Map.
  2. It uses hashCode() to compute each Key's hash and place entries into backing buckets in a Hash Table, which determines where a Value is stored. The number of buckets used is called the capacity (not to be confused with the number of Keys or Values).
  3. It accepts null as a Key and isn't Thread Safe.
  4. Keep the Key range small to avoid unnecessarily large buckets in-memory.
  5. Generally, Keys should be immutable. If a Key-Value pair requires alteration, evict and remove the previous Key and create a new entry.

Note that a HashSet stores its elements as the Keys of a backing HashMap (paired with a shared dummy Value). This guarantees uniqueness/deduplication.
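A quick sketch of that deduplication behavior:

```java
import java.util.HashSet;
import java.util.Set;

// Duplicate values hash to the same backing HashMap Key - only one copy is kept.
Set<String> set = new HashSet<>();
set.add("a");
set.add("a");
set.add("b");
System.out.println(set.size()); // 2
System.out.println(set.add("a")); // false - already present
```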

  1. https://www.baeldung.com/category/java/java-collections
  2. https://www.baeldung.com/java-hashmap
  3. https://www.baeldung.com/java-immutable-list
  4. https://blogs.oracle.com/javamagazine/post/java-lists-view-unmodifiable-immutable

Code samples:

  1. https://github.com/Thoughtscript/java-refresh-notes/tree/master/src/io/thoughtscript/refresh/collections
  2. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/lists/ListExamples.java

Java: Streams

Streams are:

  1. Immutable

  2. Used once.

    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    
    public class Example {
       public static void main(String args[]) {
          Stream<Integer> myStream = Stream.of(1,2,3,4,8,7,6,5,4);
          myStream.sorted((a, b) -> a - b);
          myStream.map(x -> x + 1).collect(Collectors.toList());
       }
    }
    // Output - Run Time error on the second use of myStream:
    // Exception in thread "main" java.lang.IllegalStateException: stream has already been operated upon or closed

List

From List:

List<Integer> myList = new ArrayList<Integer>();

for(int i = 1; i< 10; i++){
    myList.add(i);
}

Stream<Integer> myStream = myList.stream();

To List:

Stream<Integer> myStream = //...

// Java 16+ - on older versions use: myStream.collect(Collectors.toList());
List<Integer> myList = myStream.toList();

Array

From Array:

Integer[] myArr = {1, 2, 3, 4, 5};

Stream<Integer> myStream = Arrays.stream(myArr);

To Array:

Stream<Integer> myStream = //...

Integer[] myArr = myStream.toArray(Integer[]::new);

Examples

Sort using sorted() to return the third lowest Example id (the exercise requires using a Stream).

Given:

public class Example {
    private int id;
    private String name;

    // Constructor
    //... Getters and Setters
}

Example One

Stream<Example> myStream = //...

// Implements a Comparator
Stream<Example> sorted = myStream.sorted((a, b) -> a.getId() - b.getId());

// toList() via Collector
List<Example> myList = sorted 
  .collect(Collectors.toList());

myList.get(2); // zero-indexed - the third lowest id

Example Two

From a List:

List<Example> myList = //...

// Don't modify myList in place - copy out to an Array first
Example[] myArr = myList.toArray(new Example[0]);

Stream<Example> myStream = Arrays.stream(myArr);

// Implements a Comparator
Stream<Example> sorted = myStream.sorted((a, b) -> a.getId() - b.getId());

// toList() via Collector
List<Example> sortedList = sorted
  .collect(Collectors.toList());

sortedList.get(2); // zero-indexed - the third lowest id

Example Three

From a List working with result Array:

List<Example> myList = //...

// Don't modify myList in place - copy out to an Array first
Example[] myArr = myList.toArray(new Example[0]);

Stream<Example> myStream = Arrays.stream(myArr);

// Implements a Comparator
Stream<Example> sorted = myStream.sorted((a, b) -> a.getId() - b.getId());

Example[] sortedArr = sorted.toArray(Example[]::new);

sortedArr[2]; // zero-indexed - the third lowest id

Parallel Streams

Use when order doesn't matter.

Leverages underlying worker pooling from the Common Worker Pool:

List<Integer> myList = Arrays.asList(1, 2, 3, 4);

myList.parallelStream().forEach(num ->
    System.out.println(num + " " + Thread.currentThread().getName())
);

Common Operations

forEach

list.stream().forEach(consumer);
Stream<Example> myStream = //...
myStream.forEach(System.out::print);
// Equivalent Lambda form (a Stream can only be consumed once, so pick one):
// myStream.forEach(x -> System.out.println(x));

map

list.stream().map(x -> x + 1);
Stream<Example> myStream = //...
myStream.map(x -> x + 1);

Reference Type to Primitive Type

Use IntStream, LongStream, or DoubleStream for any primitive numeric typed Stream.

Integer[] myArr = {1, 2, 3, 4, 5};

Stream<Integer> myStream = Arrays.stream(myArr);

IntStream is = myStream.mapToInt(i -> i);

Eliding

Streams will occasionally elide (skip) certain internal operations if:

  1. Doing so does not modify the outcome or result of the computation.
  2. The terminal operation (e.g. count()) can be computed directly from the Stream source without traversing the pipeline.

Logging and output messages, in particular, will occasionally fail to display.
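A sketch of the count() case (Java 9+): the count can be computed from the source size alone, so the peek() side effect below may never run (this is JVM-dependent, hence no assertion on it):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

AtomicInteger sideEffects = new AtomicInteger();

long count = List.of(1, 2, 3).stream()
        .peek(x -> sideEffects.incrementAndGet()) // may be elided entirely
        .count();

System.out.println(count); // 3 either way
// sideEffects.get() may be 0 - the traversal was skipped because
// count() is computable directly from the source size
```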

Refer to:

  1. https://docs.oracle.com/javase/10/docs/api/java/util/stream/Stream.html
  2. https://blogs.oracle.com/javamagazine/post/java-quiz-stream-api-side-effects
  3. https://www.baeldung.com/java-when-to-use-parallel-stream
  4. https://howtodoinjava.com/java/stream/java-streams-by-examples/
  5. https://www.geeksforgeeks.org/arrays-stream-method-in-java/
  6. https://www.baeldung.com/java-collection-stream-foreach
  7. https://blogs.oracle.com/javamagazine/post/java-quiz-generics-primitives-autoboxing

Code samples:

  1. https://github.com/Thoughtscript/java_algos/blob/main/src/main/java/io/thoughtscript/conversions/Conversions.java
  2. https://github.com/Thoughtscript/java-refresh-notes/tree/master/src/io/thoughtscript/refresh/streams

Java: Primitive Types

Each Primitive Type has a corresponding Reference Type which wraps the underlying Primitive Type with additional functionality. Furthermore, Reference Types allow null values where they wouldn't be typically allowed (e.g. an int has no null value under normal circumstances - its default value is 0).

Note: Strings are Reference Types backed by a sequence of chars - char is their closest corresponding Primitive Type. Note further: char has Character as its corresponding Reference Type.

int

int variables must be initialized with a value everywhere except as a field on a Class itself (fields default to 0):

public class Metrics {
    private static int total;

    public static void main(String args[]) {
        System.out.println(total);
    }
}

Otherwise, initialize the variable with a value:

public class Metrics {    
    public static void main(String args[]) {
        int total = 0;
        System.out.println(total);
    }
}

Note that int can't support precise fractional division without rounding, using a String, or some LCD algo (Least Common Denominator).
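A quick illustration of that truncation:

```java
// int division truncates toward zero - the fractional part is dropped.
int a = 7 / 2;      // 3
double b = 7 / 2;   // 3.0 - still int division before the widening!
double c = 7 / 2.0; // 3.5 - one double operand forces floating-point division
System.out.println(a + " " + b + " " + c);
```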

Java: Strings, StringBuffer, StringBuilder

  1. Java Strings use the String Pool, are immutable, and are Reference Types (not Primitive Types).
  2. StringBuilder does not use Java's String Pool, isn't Thread Safe, and is better (performance and memory-wise) for concatenating complex and lengthy Strings. (StringBuilder is used automatically under the hood now for simple String concatenations like String c = a + b + ":".)
  3. StringBuffer is the Thread Safe version of StringBuilder.

String Pool

Java's String Pool points Strings with the same value to the same item in-memory.

This optimization feature is designed to offset some performance costs associated with the immutability of Java's Strings and is referred to as Interning.

Interning can be manually set via String.intern():

String x = new String("abc");
String y = x.intern();
// Will be the same
// Useful when using a String constructor

String Constructor

Using a Constructor will force two Strings with the same text to refer to different items in memory.

Caveat: unless -XX:+UseStringDeduplication is enabled at the JVM level (say, as an arg) or intern() is called.

String a = "abc";
String b = new String("abc");
// Not the same memory address

String c = "abc";
// Strings a and c have the same text value and the same memory address

Comparison

Remember the following:

String x = "abc";
String y = "abc";
System.out.println(x == y); // true

String z = new String("abc");
System.out.println(x == z); // false

JVM Fine-Tuning

-XX:StringTableSize=4901

The default StringTableSize in Java 11+ is 65536.

Note: the -X and -XX prefixes indicate non-standard and potentially unstable "power user" JVM parameter settings, respectively.

StringBuilder

Using + in a (large) loop will still (in Java 18) result in wonkiness:

  1. Inefficient from a time-complexity standpoint.
  2. The underlying implementation (automatic conversion between + and StringBuilder) is imperfectly performant for large cases.
  3. Will actually be indeterministic (as I found out). And interestingly in at least two ways:
    • By machine.
    • With identical code blocks being run sequentially.

Consider generating a String hash for comparing Sets of Arrays:

Set<int[]> exampleA = new HashSet<>();
exampleA.add(new int[]{1, 2});
exampleA.add(new int[]{3, 4});
exampleA.add(new int[]{5, 6, 7});

String stringHashA = "";

for (int[] entry : exampleA) {
    stringHashA += "{";
    for (int i = 0; i < entry.length; i++) {
        stringHashA += entry[i];
        if (i < entry.length - 1) stringHashA += ",";
    }
    stringHashA += "}";
}

Set<int[]> exampleB = new HashSet<>();
exampleB.add(new int[]{1, 2});
exampleB.add(new int[]{3, 4});
exampleB.add(new int[]{5, 6, 7});

String stringHashB = "";

for (int[] entry : exampleB) {
    stringHashB += "{";
    for (int i = 0; i < entry.length; i++) {
        stringHashB += entry[i];
        if (i < entry.length - 1) stringHashB += ",";
    }
    stringHashB += "}";
}

System.out.println(stringHashA);
System.out.println(stringHashB);
System.out.println(stringHashA.equals(stringHashB));
// stringHash will be indeterministic with respect to the two identical code blocks
/*
    {5,6,7}{3,4}{1,2} - this is deterministic and will always print this on same machine
    {5,6,7}{1,2}{3,4} - this is deterministic and will always print this on same machine
    false
*/
/*
    {3,4}{1,2}{5,6,7} - on a different machine
    {5,6,7}{1,2}{3,4} - on a different machine
    false
*/
Using StringBuilder instead:

Set<int[]> exampleA = new HashSet<>();
exampleA.add(new int[]{1, 2});
exampleA.add(new int[]{3, 4});
exampleA.add(new int[]{5, 6, 7});

StringBuilder stringHashA = new StringBuilder();
for (int[] entry : exampleA) {
    stringHashA.append("{");
    for (int i = 0; i < entry.length; i++) {
        stringHashA.append(entry[i]);
        if (i < entry.length - 1) stringHashA.append(",");
    }
    stringHashA.append("}");
}

Set<int[]> exampleB = new HashSet<>();
exampleB.add(new int[]{1, 2});
exampleB.add(new int[]{3, 4});
exampleB.add(new int[]{5, 6, 7});

StringBuilder stringHashB = new StringBuilder();
for (int[] entry : exampleB) {
    stringHashB.append("{");
    for (int i = 0; i < entry.length; i++) {
        stringHashB.append(entry[i]);
        if (i < entry.length - 1) stringHashB.append(",");
    }
    stringHashB.append("}");
}

System.out.println(stringHashA.toString());
System.out.println(stringHashB.toString());
System.out.println(stringHashA.toString().equals(stringHashB.toString()));
// stringHash will be indeterministic with respect to the two identical code blocks
/*
    {5,6,7}{3,4}{1,2} - this is deterministic and will always print this on same machine
    {3,4}{1,2}{5,6,7} - this is deterministic and will always print this on same machine
    false
*/
/* 
    {3,4}{1,2}{5,6,7} - on a different machine
    {1,2}{3,4}{5,6,7} - on a different machine
    false
*/

However:

Set<int[]> result = new HashSet<>();
result.add(new int[]{1, 2});
result.add(new int[]{3, 4});
result.add(new int[]{5, 6, 7});

StringBuilder stringHashResult = new StringBuilder();
for (int[] entry : result) {
    stringHashResult.append("{");
    for (int i = 0; i < entry.length; i++) {
        stringHashResult.append(entry[i]);
        if (i < entry.length - 1) stringHashResult.append(",");
    }
    stringHashResult.append("}");
}

System.out.println(stringHashResult.toString());
// stringHash will be deterministic on same machine but will generate different results otherwise

Refer to: https://stackoverflow.com/questions/11942368/why-use-stringbuilder-explicitly-if-the-compiler-converts-string-concatenation-t

  1. https://www.baeldung.com/java-string-pool
  2. https://stackoverflow.com/questions/11942368/why-use-stringbuilder-explicitly-if-the-compiler-converts-string-concatenation-t

Java: Important Keywords

  1. final - can't be changed as a variable, can't be Overridden as a method, and can't be subclassed as a Class.
  2. static
    • On a method: belongs to the Class not an instance or Object.
    • On a Class Field: the Value is shared between all Instances (akin to the @@ Class Variable in Ruby.)
    • static block allows for multiline initializations of static Class Variables.
  3. sealed - specifies that only certain Classes can inherit from or implement from the sealed Class or Interface.
  4. abstract - on a Class: can extend but not instantiate, can Override methods of the inherited Abstract Class.
  5. synchronized - enforce thread safety within the specified Object or Class.
  6. transient - excludes a field from Serialization; the field is restored with its default value on Deserialization.
  7. record - syntactic sugar to specify an immutable POJO / DTO.
  8. volatile - reads and writes go to main memory (bypassing CPU caches) and instruction reordering around the variable is restricted. (Due to the nature of multi-core CPUs, a value can be stale in a core's cache. In those circumstances it can be beneficial to side-step caching entirely.)
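A minimal sketch of record (hypothetical Point):

```java
// record generates the constructor, accessors, equals(), hashCode(), and
// toString(). Its fields are final - an immutable POJO/DTO in one line.
record Point(int x, int y) {}

Point p = new Point(1, 2);
System.out.println(p.x()); // 1 - accessor is x(), not getX()
System.out.println(p);     // Point[x=1, y=2]
System.out.println(p.equals(new Point(1, 2))); // true - value-based equality
```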
  1. https://blogs.oracle.com/javamagazine/post/java-record-instance-method
  2. https://blogs.oracle.com/javamagazine/post/java-quiz-sealed-type-records
  3. https://www.baeldung.com/java-sealed-classes-interfaces
  4. https://www.digitalocean.com/community/tutorials/java-records-class
  5. https://blogs.oracle.com/javamagazine/post/java-sealed-types-subtypes-final
  6. https://www.baeldung.com/java-static
  7. https://www.baeldung.com/java-volatile

Code Samples:

  1. https://github.com/Thoughtscript/java-refresh-notes
  2. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/sealed/SealedExample.java
  3. https://github.com/Thoughtscript/java-refresh-notes/blob/master/src/io/thoughtscript/refresh/records/RunRecordExample.java

Java: Security

Java String over Character Array?

Strings remain in Heap for an unspecified amount of time in between Garbage Collecting. As such, using a Character Array can be a better choice since one can alter the contents of each index of the Array at any time between Garbage Collection events.

So, it can be a great choice for sensitive PII data that's in-memory.
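A sketch of wiping a (hypothetical) secret in place:

```java
import java.util.Arrays;

// A char[] can be zeroed out as soon as the secret is no longer needed -
// a String would linger in the Heap until Garbage Collected.
char[] secret = {'h', 'u', 'n', 't', 'e', 'r', '2'};

// ... use the secret ...

Arrays.fill(secret, '\0'); // wipe in place
System.out.println(Arrays.equals(secret, new char[7])); // true - all zeroed
```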

Transient Serialization

When one uses the transient keyword, the field is skipped during Serialization and restored with its default value on Deserialization. (This is often used for PII since the sensitive values can be persisted separately - the data then has to be reassembled from two parts.)
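A sketch of that Serialization behavior (hypothetical User class; the round trip happens in-memory here rather than through a file):

```java
import java.io.*;

public class TransientExample {
    // Hypothetical User: the transient field is not written to the byte stream.
    static class User implements Serializable {
        String name;
        transient String ssn; // skipped during Serialization

        User(String name, String ssn) { this.name = name; this.ssn = ssn; }
    }

    // Serialize then immediately deserialize through an in-memory buffer.
    public static User roundTrip(User u) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(u);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (User) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User copy = roundTrip(new User("Ada", "123-45-6789"));
        System.out.println(copy.name); // Ada
        System.out.println(copy.ssn);  // null - restored as the default value
    }
}
```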

Scanners and Tools

  1. https://owasp.org/www-project-dependency-check/
  2. https://snyk.io/lp/java-snyk-code-checker/
  3. https://www.azul.com/
  1. https://www.interviewbit.com/java-interview-questions/
  2. https://www.benchresources.net/serializing-a-variable-with-transient-modifier-or-keyword/

Java: Generics

Generics abstract or generalize a method so that the specific details of the argument types are not required.

By convention:

/**
    Type Parameters:
    E - Element (used extensively by the Java Collections Framework)
    K - Key
    N - Number
    T - Type
    V - Value
    S,U,V etc. - 2nd, 3rd, 4th types
*/

Type Generics

public class KeyedTreeMap<K, V> implements Map<K, V> {
    //...
}

Type Generics apply to Reference Types, not Primitive Types.

Bounded Type Parameters

We can require that a generic be constrained to a certain type (Number rather than Integer here - Integer is final, so bounding on it would be pointless):

public static <X extends Number> void exampleMethod(X[] genericArr) {
    //...
}
public interface Pair<K, V> {
    public K getKey();
    public V getValue();
}

public class ExamplePair<K, V> implements Pair<K, V> {
    private K key;
    private V value;

    public ExamplePair(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public K getKey()    { return key; }
    public V getValue() { return value; }
}

//Instantiations of the ExamplePair class:
ExamplePair<String, Integer> p1 = new ExamplePair<String, Integer>("Even", 8);
ExamplePair<String, String>  p2 = new ExamplePair<String, String>("hello", "world");

Java: Techniques

Algo Quick Reference

Common operations specific to algorithm implementation:

// length, length(), size(), count()
myArray.length;
myString.length();
myList.size();
myStream.count();

// List
myListA.addAll(myListB); // Merge two Lists
myListA.add(0, myObject); // Prepend, unshift - adds an element to the beginning
myListA.add(myIdx, myObject); // Adds an element at index
myListA.remove(myListA.size() - 1); // Pop - removes and returns the last element (List has no pop())
myListA.remove(myIdx); // Remove element at index
myListA.sort((a,b) -> b - a); // Custom sort with Lambda Comparator
myListA.equals(myListB); // Comparison

// Conversion
List<Integer> listA = List.of(myArray); // Convert Array to List
List<Integer> listB = Arrays.asList(myArray); // Convert Array to List
Integer[] arrA = listA.toArray(new Integer[0]); // Convert List to Array
Stream<Integer> myStream = myList.stream(); // Convert List to Stream
List<Integer> listA = myStream.toList(); // Convert Stream to List
Stream<Integer> myStream = Arrays.stream(myArr); // Convert Array to Stream
Integer[] myArr = myStream.toArray(Integer[]::new); // Convert Stream to Array

// Maps
myMap.put(myKey, myValue); // Add Key Value pair to Map
myMap.containsKey(myKey); // Check if Key exists in Map without traversal
myMap.get(myKey); // Get Value using Key - Null if not found

// Chars
Character.isDigit(myChar); // Check if char is 0-9
Character.isLetter(myChar); // Check if char is letter
Character.isLetterOrDigit(myChar);  // Check if char is letter or 0-9
Character.isUpperCase(myChar); // Check if is uppercase
char myChar = myString.charAt(myIdx); // Get char at idx of String
String myStr = String.valueOf(myChar); // char to String
String myStr = new String(new char[]{'a', 'b', 'c'}); // chars to String

// Strings
String[] start = myString.split(""); // Split String
myString.substring(inclusiveIdx, exclusiveIdx); // Substring
myString.contains(otherString); // Substring match within

// Arrays
int[] copy = Arrays.copyOf(myArray, myArray.length); // Copy an Array (shallow - nested Arrays are shared)
Arrays.sort(myArray); // Sort ascending
Arrays.sort(myArray, Collections.reverseOrder()); // Sort descending (Reference Type Arrays only - e.g. Integer[])
Arrays.sort(myArray, (a, b) -> b - a); // Custom sort with Lambda Comparator (Reference Type Arrays only)

// Reference to Primitive Type
myInteger.intValue(); // Get int from Integer
myCharacter.charValue(); // Get char from Character

// int
int M = heap.size() / 2; // Same as int M = (int) (heap.size() / 2); which is the same as calling Math.floor(...) to drop decimals values.

URLConnection

Basic GET Request.

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

//...

try {
  URL u = new URL("https://jsonplaceholder.typicode.com/todos/1");
  URLConnection conn = u.openConnection();

  // Get the InputStream and enable reading from it
  InputStream stream = conn.getInputStream();
  // https://docs.oracle.com/javase/8/docs/api/java/io/InputStreamReader.html
  // "bridge from byte streams to character streams"
  InputStreamReader isr = new InputStreamReader(stream);
  // Convert the read data into a usable format.
  // https://docs.oracle.com/javase/8/docs/api/java/io/BufferedReader.html
  // Reads text from a character-input stream.
  BufferedReader in = new BufferedReader(isr);

  StringBuilder result = new StringBuilder();
  String inputLine;

  while ((inputLine = in.readLine()) != null) {
    result.append(inputLine);
  }

  System.out.println(result);
  in.close(); // Also closes the wrapped InputStreamReader and InputStream

} catch (Exception e) {
  //...
}
{  "userId": 1,  "id": 1,  "title": "delectus aut autem",  "completed": false}

https://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html and https://www.geeksforgeeks.org/java-net-urlconnection-class-in-java/

String to Date

SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
Date nextDate = formatter.parse("2024-01-25");
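For reference, a java.time alternative (java.time parsers are immutable and thread-safe, unlike SimpleDateFormat):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LocalDateParseExample {
    // ISO-8601 (yyyy-MM-dd) is the default pattern for LocalDate.parse()
    static String parseIso(String text) {
        return LocalDate.parse(text).toString();
    }

    public static void main(String[] args) {
        System.out.println(parseIso("2024-01-25")); // 2024-01-25
        // Custom patterns use DateTimeFormatter
        LocalDate custom = LocalDate.parse("25-01-2024", DateTimeFormatter.ofPattern("dd-MM-yyyy"));
        System.out.println(custom); // 2024-01-25
    }
}
```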

Date to Calendar

Useful to know this conversion since Calendar supports basic day and week addition operations.

SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
Date nextDate = formatter.parse("2024-01-25");
Calendar c = Calendar.getInstance();
c.setTime(nextDate);
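Continuing the snippet above, a minimal sketch of the day addition Calendar supports (method and values here are illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class CalendarAddExample {
    // Parses an ISO date, adds the given number of days, and formats the result
    static String addDays(String isoDate, int days) {
        try {
            SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
            Calendar c = Calendar.getInstance();
            c.setTime(formatter.parse(isoDate));
            c.add(Calendar.DAY_OF_MONTH, days); // Negative values subtract
            return formatter.format(c.getTime());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(addDays("2024-01-25", 7));   // 2024-02-01 (one week later)
        System.out.println(addDays("2024-01-25", -25)); // 2023-12-31
    }
}
```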

char to int

Remember that merely casting a char to an int (sometimes recommended/encouraged by IntelliJ) will actually yield the character's Unicode code point ('1' becomes 49), not the digit value:

String strNum = "12345";
char c = strNum.charAt(0);
int cc = (int) c;
int ccc = Character.getNumericValue(c);
System.out.println(cc == ccc); // false

Note that Integer.parseInt() will also accomplish the correct conversion along with Character.getNumericValue():

int ccc = Character.getNumericValue(c);
int cccc = Integer.parseInt(String.valueOf(c));
System.out.println(ccc == cccc); // true

length, size, and count

Length accessors differ by type:

myArray.length;
myString.length();
myList.size();
myStream.count();

array slicing

String[] L = new String[] { "Python", "Java", "Kotlin", "Scala", "Ruby", "Go", "Rust" };
String[] R = Arrays.copyOfRange(L, 1, 4);

Refer to: https://www.baeldung.com/java-slicing-arrays

chars in String

Check specific Character in String:

char c = str.charAt(0);
String s = Character.toString(c);
boolean beginClosed = s.equals("[");

Sort an Array

2D:

int[][] intervals = {{1,2},{3,4},{5,6},{7,8}};

Arrays.sort(intervals, (a,b) -> {
    if (a[0] < b[0]) return -1;
    if (b[0] < a[0]) return 1;
    return 0;
});

1D:

int[] nums = {6,5,4,3,8,1};

Arrays.sort(nums);

Swap List Indices

List<Integer> listInt = new ArrayList<>(List.of(0, 1, 2, 3, 4, 5));

int l = 0;
int r = 5;

// Reverses listInt in place
while (l < r) {
    Collections.swap(listInt, l, r);
    l++;
    r--;
}

Java Mail

To configure, install, and use Java Mail:

<!-- pom.xml -->
<!-- Java Email Dependencies -->
<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>javax.mail</artifactId>
    <version>1.5.5</version>
</dependency>
# application.yml
spring:
  # Enable auth
  mail:
    host: smtp.gmail.com
    port: 587
    username: aaaaaaa
    password: aaaaaaa
    properties:
      mail:
        debug: true
        transport:
          protocol: smtp
        smtp:
          auth: true
          starttls:
            enable: true
package io.thoughtscript.example.services;

import io.thoughtscript.example.Constants;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.stereotype.Service;

@Slf4j
@Service
public class EmailService {

    @Autowired
    JavaMailSender javaMailSender;

    public void sendMagicEmail(String emailAddress, String token) {
        SimpleMailMessage email = new SimpleMailMessage();

        StringBuffer emailContent = new StringBuffer();
        emailContent.append(Constants.EMAIL_MAGIC_LINK_GREETING);
        emailContent.append(Constants.AUTH_LOGIN_ENDPOINT_FULLY_QUALIFIED);
        emailContent.append("?token=");
        emailContent.append(token);
        emailContent.append("&email=");
        emailContent.append(emailAddress);

        email.setFrom("testapp@test.com");
        email.setTo(emailAddress);
        email.setText(emailContent.toString());
        email.setSubject("test");

        log.info(email.getTo()[0]);
        log.info(email.getText());
        log.info(email.getSubject());

        javaMailSender.send(email);
    }
}

Refer to: https://www.digitalocean.com/community/tutorials/javamail-example-send-mail-in-java-smtp

Convert Time between TimeZones

import java.time.*;

//...
ZonedDateTime firstTime = ZonedDateTime.of(
  LocalDateTime.of(
    LocalDate.of(2000, 1, 26),
    LocalTime.of(19, 59)
  ),
  ZoneId.of("America/Halifax")
);

ZonedDateTime secondTime = firstTime.withZoneSameInstant(ZoneId.of("Asia/Makassar"));
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm");
String result = secondTime.format(formatter);

https://www.codewars.com/kata/605f7759c8a98c0023833718/train/java

  1. https://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html
  2. https://www.geeksforgeeks.org/java-net-urlconnection-class-in-java/
  3. https://www.digitalocean.com/community/tutorials/javamail-example-send-mail-in-java-smtp
  4. https://www.baeldung.com/java-slicing-arrays

Java: Asynchronous Programming

Java Threads

  1. Java Threads are mapped 1:1 to underlying Operating System Threads by default (which the OS schedules across logical or physical CPU cores).
  2. Java Threads can be pooled to help ensure that the application's needs don't overwhelm the available physical resources.
  3. Java Thread Pooling relies heavily on the Executors and Runnable implementations:
    • Runnable - implementations of Runnable can be passed into Executors to be run automatically.
    • ExecutorService - the core task-submission interface; the Executors factory methods define Thread Pool sizes on the fly or programmatically.
    • ThreadPoolExecutor - use for fine tuning a Thread Pool.
    • ScheduledExecutorService - for executing a task after a delay or on a recurring schedule.

Examples:

ExecutorService executor = Executors.newFixedThreadPool(10);
ExecutorService nonBlockingService = Executors.newSingleThreadExecutor();
nonBlockingService.execute(() -> {
    jmsClient.sendObject();
});
public class WorkerThread implements Runnable {
    @Override
    public void run() { /* ... */ }
}

ThreadPoolExecutor executorPool = new ThreadPoolExecutor(//...);
executorPool.execute(new WorkerThread());
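A minimal sketch of the ScheduledExecutorService mentioned above (the delay value is arbitrary):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduledExample {
    static String runDelayed() {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        try {
            // Schedule a Callable to run after a 100ms delay
            ScheduledFuture<String> future =
                    scheduler.schedule(() -> "ran after delay", 100, TimeUnit.MILLISECONDS);
            return future.get(); // Blocks until the scheduled task completes
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runDelayed()); // ran after delay
    }
}
```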

Refer to: https://www.baeldung.com/java-executor-service-tutorial and https://www.javatpoint.com/executor-framework-java

Futures vs CompletableFutures

  1. Futures
    • Blocking.
    • Can't combine results from more than one Thread.
  2. CompletableFutures
    • Added in Java 8 and significantly improved in Java 9.
    • Chainable methods allow for JavaScript Promise-like behavior (akin to Promise.resolve(), Promise.reject(), and Promise.all()).
    • Can combine results from multiple Threads.
    • Non-blocking.
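A minimal sketch of combining results from two asynchronous tasks, which plain Futures can't do:

```java
import java.util.concurrent.CompletableFuture;

public class CombineExample {
    static int combined() {
        // Two independent asynchronous computations
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 20);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 22);
        // thenCombine merges both results once each completes; join() blocks for the final value
        return a.thenCombine(b, Integer::sum).join();
    }

    public static void main(String[] args) {
        System.out.println(combined()); // 42
    }
}
```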

Asynchronous Promises

  1. Resolving and using an Asynchronous Object: CompletableFuture.supplyAsync(() -> { //... }).thenApplyAsync(result -> { //... })
if (openCsvService.validate(file)) {
    CsvTransfer csvTransfer = openCsvService.setPath(file);
    Instant start = TimeHelper.start(Constants.ALL);

    return CompletableFuture.supplyAsync(() -> {
        try {
            CompletableFuture<String> save = openCsvService.saveCsv(file);
            responseTransfer.setMessage(save.get());
        } catch (Exception e) {
            responseTransfer.setMessage(CSV_UPLOAD);
        }
        return responseTransfer;
    }).thenApplyAsync(result -> {
        try {
            CompletableFuture<List<String[]>> allPromise = openCsvService.readCsvAll(csvTransfer);
            result.setCsv(allPromise.get());
        } catch (Exception e) {
            responseTransfer.setMessage(GENERIC_EXCEPTION);
        }
        result.setPerformance(TimeHelper.stop(start));
        return result;
    });
}

responseTransfer.setMessage(CSV_VALIDATION_EXCEPTION);
return CompletableFuture.completedFuture(responseTransfer);

Thread Safety

  1. synchronized - enforce thread safety within the contents of a Class or Object.
    • On a Static method, thread safety is enforced on the Class.
    • On a Non-Static method, thread safety is enforced on the Object.
  2. Atomic Classes (e.g. AtomicInteger) provide inherently thread-safe, lock-free operations on single values.
  3. Refer to the article on Thread-Safe Singletons.
  4. Refer to the article on Computer Science Processes and Threads.
  5. Java provides Semaphores and Locks:
    • ReentrantLocks allow a single Thread to (re)access a resource at a time.
    • Semaphores allow a specified number of Threads to share access to a resource at any given time.
  6. Wait() vs Sleep():
    • Wait() - belongs to Object, Non-Static, releases the monitor Lock, and should only be called from a Synchronized context.
    • Sleep() - belongs to Thread, Static, and does not release any held Locks.
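A minimal sketch contrasting a synchronized (Non-Static, Object-locked) method with an AtomicInteger; the iteration and thread counts are arbitrary:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafetyExample {
    private int plain = 0;
    private final AtomicInteger atomic = new AtomicInteger(0);

    // Non-Static synchronized: the lock is held on this Object
    synchronized void incrementPlain() { plain++; }

    // Atomic classes provide lock-free thread-safe updates
    void incrementAtomic() { atomic.incrementAndGet(); }

    static int[] counts() {
        ThreadSafetyExample ex = new ThreadSafetyExample();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                ex.incrementPlain();
                ex.incrementAtomic();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new int[]{ex.plain, ex.atomic.get()};
    }

    public static void main(String[] args) {
        int[] result = counts();
        System.out.println(result[0]); // 20000 - no lost updates with synchronized
        System.out.println(result[1]); // 20000 - no lost updates with AtomicInteger
    }
}
```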
  1. https://www.javatpoint.com/executor-framework-java
  2. https://www.baeldung.com/java-executor-service-tutorial
  3. https://blogs.oracle.com/javamagazine/post/java-concurrent-synchronized-threading
  4. https://www.geeksforgeeks.org/difference-between-wait-and-sleep-in-java/

Java: SOAP

Producer

  1. Generates Classes from an .xsd file.
  2. Provides Web Services via the @EnableWs configuration annotation and @Endpoint handler annotation.
  3. Makes a WSDL accessible to SOAP clients so relevant Classes can be generated client-side.
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="/jaxb/gen"
        xmlns:examples="/jaxb/gen"
        elementFormDefault="qualified">

    <element name="getComplexExampleRequest" type="examples:GetComplexExampleRequest"></element>
    <element name="getComplexExampleResponse" type="examples:GetComplexExampleResponse"></element>
    <element name="complexExample" type="examples:ComplexExample"></element>
    <element name="simpleStringEnumExample" type="examples:SimpleStringEnumExample"></element>

    <complexType name="GetComplexExampleRequest">
        <sequence>
            <element name="exampleId" type="int"/>
        </sequence>
    </complexType>

    <complexType name="GetComplexExampleResponse">
        <sequence>
            <element name="complexExample" type="examples:ComplexExample"/>
            <element name="name" type="string"/>
            <element name="gender" type="string"/>
            <element name="created" type="dateTime"/>
        </sequence>
    </complexType>

    <complexType name="ComplexExample">
        <sequence>
            <element name="exampleId" type="int"/>
            <element name="description" type="string"/>
            <element name="stringEnum" type="examples:SimpleStringEnumExample"/>
        </sequence>
    </complexType>

    <simpleType name="SimpleStringEnumExample">
        <restriction base="string">
            <enumeration value="HELLO"/>
            <enumeration value="SUCCESS"/>
            <enumeration value="FAIL"/>
        </restriction>
    </simpleType>
</schema>
@EnableWs
@Configuration
public class SoapConfig extends WsConfigurerAdapter {

    @Bean
    public ServletRegistrationBean messageDispatcherServlet(ApplicationContext applicationContext) {
        MessageDispatcherServlet servlet = new MessageDispatcherServlet();
        servlet.setApplicationContext(applicationContext);
        servlet.setTransformWsdlLocations(true);
        return new ServletRegistrationBean(servlet, "/ws/*");
    }

    @Bean(name = "complexExample")
    public DefaultWsdl11Definition defaultWsdl11Definition(XsdSchema exampleSchema) {
        DefaultWsdl11Definition wsdl11Definition = new DefaultWsdl11Definition();
        wsdl11Definition.setPortTypeName("ComplexExamplePort");
        wsdl11Definition.setLocationUri("/ws");
        wsdl11Definition.setTargetNamespace(URI);
        wsdl11Definition.setSchema(exampleSchema);
        return wsdl11Definition;
    }

    @Bean
    public XsdSchema exampleSchema() {
        return new SimpleXsdSchema(new ClassPathResource(XSD));
    }
}
@Slf4j
@Endpoint
public class ComplexExampleEndpoint {

    @Autowired
    ComplexExampleRepository complexExampleRepository;

    @PayloadRoot(namespace = URI, localPart = "getComplexExampleRequest")
    @ResponsePayload
    public GetComplexExampleResponse getCountry(@RequestPayload GetComplexExampleRequest request) {
        GetComplexExampleResponse response = new GetComplexExampleResponse();
        response.setComplexExample(complexExampleRepository.findExample(request.getExampleId()));
        return response;
    }
}

Consumer

  1. Generates Classes client-side by accessing the hosted WSDL.
  2. Invokes calls against the Web Service using the specified WSDL Class schemas.
curl --header "content-type: text/xml" -d @curl-request.xml http://localhost:8080/ws
<!-- curl-request.xml -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:examples="https://github.com/Thoughtscript/java_soap_wsdl_2023">
   <soapenv:Header/>
   <soapenv:Body>
      <examples:getComplexExampleRequest>
         <examples:exampleId>1</examples:exampleId>
      </examples:getComplexExampleRequest>
   </soapenv:Body>
</soapenv:Envelope>
  1. https://spring.io/guides/gs/producing-web-service/
  2. https://spring.io/guides/gs/consuming-web-service/
  3. https://github.com/spring-guides/gs-consuming-web-service
  4. https://www.baeldung.com/spring-boot-soap-web-service
  5. https://www.baeldung.com/jaxb

Code samples:

  1. https://github.com/Thoughtscript/java_soap_wsdl_2023

Java: Text Editors

Troubleshooting Major-Minor Versions

  1. Set the correct SDK
    • File > Project Structure > Project Settings > Project > SDK
    • File > Project Structure > Project Settings > Project > Language Level
  2. Check the Platform Settings
    • File > Project Structure > Platform Settings > SDKs
  3. Check the pom.xml
    • <properties>
    • <maven.compiler.source>
    • <maven.compiler.target>
  4. Check the Run and Debug Configurations
    • In the Upper-Right Corner
    • Select the Run/Debug Configurations dropdown > Edit Configurations... > Build and run
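The pom.xml entries from step 3 typically look like the following (the Java version shown is illustrative):

```xml
<!-- pom.xml -->
<properties>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
</properties>
```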

Kotlin: General Concepts

Improvements

Kotlin was introduced as a Java ecosystem language variant with the following improvements:

  1. More concise (less verbose) syntax:
    • function to fun.
    • Unit return Type for void Methods.
    • : replaces both extends and implements.
    • The new keyword isn't required to invoke a Constructor.
  2. Greater Null Safety and Safe Navigation support:
    • Elvis operator.
    • ? Nullable Type suffix.
    • null and Nothing Type.
    • Types are Non-Nullable by default.
  3. Functional programming paradigm oriented from the beginning:
    • Functions can be free-standing and don't have to be attached to or declared within a Class.
    • Functions don't have to be Methods.
    • Tail Recursion optimization.
  4. Many performance improvements:
    • Asynchronous light-weight Coroutines pre-Java 21.
    • Type inferencing.
    • Compilation.
    • Singleton declaration.
  5. Greater flexibility with Class and Variable declarations:
    • val (constant, replaces final, immutable) and var (mutable).
    • Constructors in Class definition.
    • public visibility by default.
    • static keyword removed.
    • open must be explicitly set in Concrete Class definitions to allow Subclassing.
    • Keyword support for companion object and object Singletons.
    • lateinit lazy Variable initialization.
  1. https://shirsh94.medium.com/top-100-kotlin-interview-questions-and-answers-d1f6785f336a

Code samples:

  1. https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/Function.kt

Kotlin: Classes and Types

Types Equality

Unlike Java, Kotlin has two equality operators, mirroring JavaScript's "looser" and "stricter" degrees of comparison:

  1. == (Structural equality) compares by Values.
  2. === (Referential equality) compares by same Pointer/Memory Address.
  3. null has Nothing Type.

Consult https://kotlinlang.org/docs/equality.html

Classes

  1. Constructors are parameterized on Class definitions. (Similar to GoLang struct.)
  2. Classes aren't subclassable (final) by default: use open to allow Subclassing (Interfaces and Abstract Classes are open by default).
  3. implements and extends are both replaced by :.
  4. Interfaces have some features that Abstract Classes do in Java (they can define some Getters/Setters). Otherwise, as usual, Interfaces define Method signatures (Abstract Methods) and can't hold initialized state (Abstract Classes can define state and have initialized fields).
  5. public is the default visibility.
// By default all Classes can't be inherited from.
// Add the open keyword to allow that.

// Note the constructor in parentheticals
open class ExampleA(a: String, b: String) {
    var stringVariable: String = a
    val stringValue: String = b

    // Method - Function on Class
    fun testA(): Unit {
        println("stringVariable: $stringVariable, stringValue: $stringValue")
    }
}

class ExampleB(a: String, b: String, c: String): ExampleA(a, b) {
    var additionalStringVariable: String = c

    fun testB(): Unit {
        println("stringVariable: $stringVariable, stringValue: $stringValue, additionalStringVariable: $additionalStringVariable")
    }

    // Inner class
    inner class ExampleC(d: String) {
        val additionalStringValue: String = d
        fun testC(): Unit {
            println("stringVariable: $stringVariable, stringValue: $stringValue, additionalStringVariable: $additionalStringVariable, additionalStringValue: $additionalStringValue")
        }
    }
}

open class ImplicitGettersAndSetters() {
    var stringVariable: String = ""
    val stringValue: String = "I'm set already as a val"

    // Method - Function on Class
    fun getFields(): Unit {
        println("stringVariable: $stringVariable, stringValue: $stringValue")
    }
}
  1. https://kotlinlang.org/docs/equality.html

Code samples:

  1. https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/Classes.kt
  2. https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/AbstractClass.kt
  3. https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/Interfaces.kt

Kotlin: Variables

Variable Declaration

  1. val specifies a read-only Variable (akin to Java's final; true compile-time constants use const val).
  2. var specifies a mutable Variable.

Destructuring

Kotlin supports Destructuring the fields of an Object into separate Variables:

val (name, age) = person

Lazy Initialization

lateinit allows Variables to be initialized lazily. They must be initialized before they're used:

fun exampleA(): Unit {
    // Can't be val
    lateinit var name: String  // Can't be initialized
    name = "I'm initialized" // Placeholder for lengthy or asynchronous assignment
    println("I'm initialized with $name")
}

Classes support Receivers, ::, and the .isInitialized field:

class Example() {
    lateinit var name: String  // Can't be initialized

    // Method - Function on Class
    fun isInitializedExample(): Unit {
        name = "I'm initialized" // Placeholder for lengthy or asynchronous assignment

        // Supports :: and .isInitialized in Classes
        if (::name.isInitialized) {
            println("Initialized value present: $name")

        } else {
            println("var name is not initialized yet")

        }
    }
}
  1. https://kotlinlang.org/docs/properties.html#late-initialized-properties-and-variables
  2. https://www.baeldung.com/kotlin/checking-lateinit

Kotlin: Null Safety

Kotlin supports advanced Null Safety features.

Nullable

By default, Types aren't Nullable unless they're explicitly declared so with a suffixed ?:

// From the Official Documentation
var a: String = "abc" // Regular initialization means non-nullable by default
a = null // compilation error
// From the Official Documentation
var b: String? = "abc" // can be set to null
b = null // ok

Read more here: https://kotlinlang.org/docs/null-safety.html#checking-for-null-in-conditions

Elvis Operator

Kotlin supports Ruby/Rails-style Safe Navigation:

  1. Elvis operator ?: - supplies a fallback value when the left-hand side is null (the ?. Safe-Call operator is akin to Ruby/Rails Safe Navigation).

  2. .let() (executes when non-null) and .run() (executes when null) in the chained pattern:

    fun chainedSafeElvis(arg: Any?): Unit {
        // If not-null then execute this block
        // with reference to obj via 'it'
        arg?.let {
            println("chainedSafeElvis: $it")
        } ?: run {
            // If null then execute this block instead
            println("chainedSafeElvis: Null found")
        }
    }
  1. https://kotlinlang.org/docs/null-safety.html#checking-for-null-in-conditions

Code samples:

  1. https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/SafeNavigation.kt

Kotlin: Coroutines

Kotlin supports Asynchronous operations and light-weight Threading.

Suspend Functions

suspend indicates that a Function can be paused and resumed without blocking a Thread.

suspend Functions can only be called from within a Coroutine or another suspend Function.

// function to be launched
// suspend indicates that the Function can be called in async operations, paused, or stopped
suspend fun suspendedFunctionExampleA(text: String): String {
    println("I'm in the returned suspended function with $text")
    return "I'm done: $text"
}

suspend fun suspendedFunctionExampleB(text: String) = coroutineScope {
    println("I'm in the suspended function with no return: $text")
    // No return
}

Coroutines

Coroutines, suspend Functions, etc. provide support for light-weight virtual threading.

// https://kotlinlang.org/docs/coroutines-overview.html
// Coroutines are Kotlin's approach to Threading and Asynchronous operations
// Akin to Java's Lightweight Virtual Threads

// See also Scope builder notation: https://kotlinlang.org/docs/coroutines-basics.html#scope-builder
fun spawnExecExample() = runBlocking {

    println("I'm in the blocking scope")

    // Define a Channel like in go
    val channel = Channel<String>()

    // Declare a coroutine block (like a go routine)
    // Spawns a light-weight virtual thread
    // is a Job

    // Use GlobalScope.launch now
    GlobalScope.launch {
        println("I'm in a coroutine")

        // https://kotlinlang.org/docs/composing-suspending-functions.html#concurrent-using-async
        // Conceptually very similar to Launch except it's deferred and can use .await()
        val resultA = async { suspendedFunctionExampleA("exampleA") }.await()
        println("I'm awaited at resultA: $resultA")

        // Note: each .await() immediately suspends until its result is ready,
        // so these run sequentially - launch both async blocks first for true concurrency
        val resultB = async { suspendedFunctionExampleA("exampleB") }.await()
        val resultC = async { suspendedFunctionExampleA("exampleC") }.await()
        // The documentation shows an example where async operations are composed
        // but using .await() is required to print a returned value like in the below

        async { suspendedFunctionExampleB("exampleD") }

        // IPC send to channel
        async {
            for (x in 1..10) channel.send("$x")
            channel.close() // close channel
        }

        // Can compose the above without using promises
        println("I'm composed: $resultA, $resultB, $resultC")
    }

    for (y in channel) println("Channel received: $y")
    // Commenting out this line will prevent the blocking scope from printing everything!!
}

https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/Asynchronous.kt

Apparently, these are still more performant than (Java 21) Virtual Threads: https://medium.com/@AliBehzadian/java-virtual-threads-performance-vs-kotlin-coroutines-f6b1bf798b16

  1. https://medium.com/@AliBehzadian/java-virtual-threads-performance-vs-kotlin-coroutines-f6b1bf798b16

Code samples:

  1. https://github.com/Thoughtscript/kotlin_2024/blob/main/kotlin/src/main/kotlin/io/thoughtscript/example/language/Asynchronous.kt

Kotlin: Handling Static Methods and Singletons

Static Keyword

  1. No static Keyword.
    • init block instead of static blocks for complex initializations.
  2. companion object can be used to define the equivalent of static Methods.

Singletons

Singletons can be declared through a single keyword: object:

object Singleton {
    fun doSomething() = "Doing something"
}

Consult: https://www.baeldung.com/kotlin/singleton-classes

Companion Objects

  1. companion object can be used to define the equivalent of static Methods.
  2. Only one companion instance is present at any given time (object) and allows methods to be called without declaring a new instance of the Class.
class ReadRow {
    companion object {
        @JvmStatic
        fun execute(conn: Connection) {
            val query = "SELECT * FROM ..."
        }
    }
}
  1. https://kotlinlang.org/docs/classes.html#companion-objects
  2. https://www.baeldung.com/kotlin/singleton-classes

Code samples:

  1. https://github.com/Thoughtscript/cockroachdb-kotlin-client/blob/master/cockroachdb-kotlin-client/src/main/kotlin/com/cockroachlabs/client/queries/ReadRow.kt

Spring: General Concepts

  1. Inversion of Control - the Spring container creates, configures, and injects application Objects (Beans) rather than application code instantiating its own dependencies. Spring Boot layers Convention over Configuration on top: preconfigured default values reflecting best practices and commonly used patterns.
  1. https://gitlab.com/Thoughtscript/interview_helper
  2. https://www.baeldung.com/
  3. https://spring.io/
  4. https://mkyong.com/
  5. https://github.com/spring-projects
  6. https://github.com/spring-projects/spring-boot/tree/main/spring-boot-project/spring-boot-starters
  7. https://mvnrepository.com/repos/central

Spring: Project Layout

Typical Project Layout

+- gradle
|  +- ...
|
+- build
|  +- ...
|
+- src
|  +- main
|  |  +- java
|  |  |  +- com
|  |  |     +- example
|  |  |        +- app
|  |  |           +- ...
|  |  |           +- Main.java
|  |  |
|  |  +- resources
|  |  |  +- application.properties
|  |  |  +- ...
|  |  |
|  |  +- webapp
|  |     +- resources
|  |     |  +- scripts
|  |     |     +- ...
|  |     |  +- styles
|  |     |     +- ...
|  |     +- WEB-INF
|  |     |  +- JSP
|  |     |     +- ...
|  |
|  +- test
|     +- java
|        +- com
|           +- example
|              +- app
|                 +- MainTest.java
|   
+- target
|  +- ...
|
+- pom.xml
+- gradlew.bat
+- build.gradle
+- settings.gradle 
+- .gitignore
+- README.md
+- ...

Common Maven Commands

mvn clean
mvn install
mvn spring-boot:run

Common Gradle Commands

./gradlew clean
./gradlew build
./gradlew run

Java SSL Keytool

keytool -genkey \
  -alias interviewhelper \
  -keystore interviewhelper.p12 \
  -storetype PKCS12 \
  -keyalg RSA \
  -storepass F#4%afdfdsdfdfa*adf \
  -validity 730 \
  -keysize 4096

IntelliJ

As of recent IntelliJ versions, .war artifacts no longer populate immediately after a pom.xml has been successfully loaded.

  1. Click File > Plugins > Type in Maven Helper
  2. The Maven Helper plugin provides additional context menu options that appear to be missing now out of the box.
  3. To correctly build a .war file, right-click on pom.xml > Run Maven > Plugins > maven-war-plugin > war:war
  1. https://gitlab.com/Thoughtscript/java-reactive-pubsub
  2. https://gitlab.com/Thoughtscript/interview_helper/
  3. https://web.archive.org/web/20230128043222/https://x-team.com/blog/react-reactor-passwordless-spring/
  4. https://plugins.jetbrains.com/plugin/7179-maven-helper

Spring: Logging

  1. Slf4j - The Simple Logging Facade for Java serves as an abstraction or interface for an underlying target logging framework or library.
  2. Log4j - Apache, the original default logging library.
  3. Logback - Successor to Log4j.
  4. Log4j 2 - Apache's upgrade for Log4j that provides significant improvements over its predecessor and applies many lessons from Logback.
  5. Lombok - A utility library that provides many helpful annotations. Includes Slf4j logging as an annotation.

Common Combinations

  1. Lombok + Slf4j (included in Lombok) + Logback (included in Spring Boot Starters, the default)
  2. Slf4j + Log4j
  1. https://stackoverflow.com/questions/39562965/what-is-the-difference-between-log4j-slf4j-and-logback
  2. https://krishankantsinghal.medium.com/logback-slf4j-log4j2-understanding-them-and-learn-how-to-use-d33deedd0c46

Code samples:

  1. https://github.com/Thoughtscript/spring_boot_2023 (Spring Boot + Lombok + Slf4j + Logback)

Spring: Annotations

Common Spring-related Decorator-style annotations.

Spring MVC

  1. @PathVariable with @GetMapping("/{product_id}") to specify a URL Path Variable
  2. @RequestBody - the HTTP Request Body data. Can be Form Data. By Key-Value.
  3. @RequestParam - an HTTP Key-Value Request Parameter passed in the URL Query String.
  4. @RestController - annotates @Controller and @ResponseBody.

Jackson, JAX

  1. @JsonNaming - specifies the naming strategy (e.g., snake case or camel case) applied to a Class's serialized/deserialized fields.
  2. @JsonPropertyOrder - specifies the exact serialization/deserialization order to be specified (since Spring will use alphabetical order by default). Sometimes a property must be computed after another regardless of alphabetical order.
  3. @JsonIgnore - specifies that a field should be ignored during serialization. Useful for preventing infinite JSON recursion with One-to-Many, Many-to-Many, and Many-to-One table relationships.
  4. @Transient - not to be confused with the transient keyword (which excludes a field from Java Serialization), @Transient specifies that a field should be ignored by Persistence/Object Relational Mapping.

Spring

  1. @ComponentScan - specifies that the Application should recursively search for relevant Beans within the specified Package or Classpath.
  2. @SpringBootApplication - annotates @Configuration, @EnableAutoConfiguration, and @ComponentScan.
  3. @Configuration - specifies that the Bean is for configuration, contains configuration settings, Extends or Overrides some default configuration Bean, loads Application Properties, or sets them programmatically.
  4. @EnableWebMvc - Spring (but not Spring Boot) configuration annotation.
  5. @EnableAutoConfiguration - use the default auto-configured settings.
  6. @EnableAsync and @Async - enables Asynchronous programming in Spring and configures the desired Thread Executor arrangement so the @Async annotation can be added to a Bean method, automatically wrapping it with an Executor. Typically used with CompletableFutures and/or Futures.
  7. @Transactional - specifies that a method should be wrapped in a database Transaction.
  8. @EnableRetry, @Retryable, and @Recover - enable specific method invocations to be attempted multiple times (default of three) through method intercepting in the event of a specified Exception. Remember that @Recover is used to handle Exceptions emanating from an @Retryable attempt.

Java EE

  1. @Entity - Javax Persistence annotation specifying that the POJO in question is relevant to Object Relational Mapping.
  2. @Bean - Java EE annotation indicating that the Class is a Bean (should be initialized and used as such) within some configuration Class.
  1. https://thorben-janssen.com/hibernate-tip-difference-between-joincolumn-and-primarykeyjoincolumn/
  2. https://www.baeldung.com/spring-transactional-propagation-isolation
  3. https://www.techferry.com/articles/hibernate-jpa-annotations.html
  4. https://www.baeldung.com/spring-retry
  5. https://github.com/spring-projects/spring-retry

Spring: Hibernate

  1. The Hibernate Naming Strategy divides into two basic strategies: (1) ImplicitNamingStrategy (derives logical names when none are given explicitly) and (2) PhysicalNamingStrategy (maps logical names to physical database names). The default naming strategy can be configured in application.properties or as a configured setting. (It is not set in the Hibernate SQL Dialect.)
  2. Used with Spring Data, JPA, Entity Framework, and Javax Persistence.
  3. Object Relational Mapping framework for converting SQL data into valid Java entities in-memory at runtime.
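With Spring Boot, for example, both strategies can be overridden in application.properties. The strategy Classes shown below are illustrative and vary by Spring Boot/Hibernate version:

```properties
# application.properties
spring.jpa.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
spring.jpa.hibernate.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
```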

Relationships

Relations between data structures, Rows, and Tables are specified with a mix of annotations and data-layer constraints.

Note: _fk is sometimes appended to a column below but should not be confused with a Foreign Key (Foreign Key Constraint) proper. I use that convention to make it easy to read what a column is doing - it stands for foreign-key-like or what's sometimes referred to as a Foreign Key Without a Constraint. (Best practices encourage the use of true Foreign Key Constraints in Production, DRY JOIN tables, and removing any additional Foreign Key columns.) Consult: https://ardalis.com/related-data-without-foreign-keys/ for more on the nomenclature.

Lazy Loading

Note that fetch defaults to FetchType.EAGER for to-one associations (@OneToOne, @ManyToOne), meaning associated entities are loaded immediately, and to FetchType.LAZY for to-many associations (@OneToMany, @ManyToMany). FetchType.LAZY loads the associated entities only when the field is first accessed.

Generally, Hibernate encourages FetchType.LAZY: "If you forget to JOIN FETCH all EAGER associations, Hibernate is going to issue a secondary select for each and every one of those which, in turn, can lead to N+1 query issues. For this reason, you should prefer LAZY associations."

Refer to: https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#architecture and https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#fetching
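
As a sketch of the JOIN FETCH the Hibernate guide refers to, a JPQL query over entities like the A/B examples in this section can initialize a LAZY association in the same select, avoiding the secondary query:

```sql
-- JPQL (not raw SQL): fetch each A together with its lazy b association in one query
SELECT a FROM A a JOIN FETCH a.b
```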

One to One

Example: User and UserProfile.

@PrimaryKeyJoinColumn

@Entity
@Table(name = "A")
public class A {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  //  If a Foreign Key is explicitly defined between A to B.
  @OneToOne(cascade = CascadeType.MERGE)
  @PrimaryKeyJoinColumn
  private B b;
}
@Entity
@Table(name = "B")
public class B {

  // @GeneratedValue
  @Id
  @Column(name = "id")
  private int id;
}

@JoinColumn

@Entity
@Table(name = "A")
public class A {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  @OneToOne(fetch = FetchType.EAGER)
  // @MapsId would instead derive A's id from b (shared primary key);
  // with a separate bId column, a plain @JoinColumn suffices.
  @JoinColumn(name = "bId")
  private B b;
}
@Entity
@Table(name = "B")
public class B {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  // Optional
  @OneToOne(mappedBy = "b", cascade = CascadeType.ALL)
  private A a;
}
DROP TABLE IF EXISTS A;
CREATE TABLE A (
  id INT(11) NOT NULL AUTO_INCREMENT,
  bId INT(11) NOT NULL,
  PRIMARY KEY (id)
);


DROP TABLE IF EXISTS B;
CREATE TABLE B (
  id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);

One To Many

Example: Owner and Pet (an Owner can own many Pets). In the example below, many A are mapped to a single B.

@Entity
@Table(name = "A")
public class A {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;
}
@Entity
@Table(name = "B")
public class B {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  @OneToMany(cascade = CascadeType.ALL)
  @JoinTable(name = "B_A", joinColumns = { @JoinColumn(name = "bId") }, inverseJoinColumns = { @JoinColumn(name = "aId") })
  // Alternatively, if a Foreign Key or column is used
  // instead of DRY-JOIN table.
  // @OneToMany(fetch = FetchType.LAZY, mappedBy="a_fk")
  private Set<A> manyA;
}
DROP TABLE IF EXISTS A;
CREATE TABLE A (
  id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);


DROP TABLE IF EXISTS B;
CREATE TABLE B (
  id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);

-- Make sure to verify the field constraints when
-- using a DRY-JOIN table that's also managed by Hibernate
DROP TABLE IF EXISTS B_A;
CREATE TABLE B_A (
  id INT(11),
  bId INT(11),
  aId INT(11)
);

Many to One

Example: Owner and Pet (an Owner can own many Pets). In the example below, many B are mapped to a single A.

Assuming an FK exists:

@Entity
@Table(name = "A")
public class A {

  @Id
  @Column(name = "A_id")
  @GeneratedValue
  private int id;
}
@Entity
@Table(name = "B")
public class B {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  @ManyToOne
  @JoinColumn(name="A_id",foreignKey=@ForeignKey(name="A_id"))
  // Alternatively, if no Foreign Key Constraints exist.
  // @JoinColumn(name="A_id")
  private A a;
}
DROP TABLE IF EXISTS A;
CREATE TABLE A (
  A_id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (A_id)
);


DROP TABLE IF EXISTS B;
CREATE TABLE B (
  id INT(11) NOT NULL AUTO_INCREMENT,
  A_id INT(11),
  PRIMARY KEY (id)
);

Many to Many

Example: Books and Authors.

Assuming an FK exists:

@Entity
@Table(name = "A")
public class A {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  @ManyToMany
  @JoinTable(name= "A_B", 
    joinColumns = @JoinColumn(name = "aId"),
    inverseJoinColumns = @JoinColumn(name = "bId"))
  private Set<B> manyB;
}
@Entity
@Table(name = "B")
public class B {

  @Id
  @Column(name = "id")
  @GeneratedValue
  private int id;

  @ManyToMany(mappedBy="manyB")
  private Set<A> manyA;
}
DROP TABLE IF EXISTS A;
CREATE TABLE A (
  id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);


DROP TABLE IF EXISTS B;
CREATE TABLE B (
  id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);

-- Make sure to verify the field constraints when
-- using a DRY-JOIN table that's also managed by Hibernate
DROP TABLE IF EXISTS A_B;
CREATE TABLE A_B (
  id INT(11),
  aId INT(11),
  bId INT(11)
);

Jackson JSON Serialization

To avoid infinite recursion:

  1. Use @JsonIgnore on one side of the bidirectional relationship.
  2. Use a custom serializer.
  3. Use @JsonView(Views.Internal.class).

Refer to: https://www.baeldung.com/jackson-bidirectional-relationships-and-infinite-recursion

Best Practices

  1. Use fetch = FetchType.LAZY when configuring Many-to-Many, Many-to-One, and One-to-Many relationships.
  2. Prefer Foreign Key Constraints over a column that refers to a Primary Key or other unique identifier.
  3. Use some annotation like @JsonIgnore to prevent infinite JSON serialization when using Jackson.
  1. https://www.baeldung.com/hibernate-naming-strategy
  2. https://dev.to/jhonifaber/hibernate-onetoone-onetomany-manytoone-and-manytomany-8ba
  3. https://www.baeldung.com/hibernate-one-to-many
  4. https://www.baeldung.com/jackson-bidirectional-relationships-and-infinite-recursion
  5. https://www.baeldung.com/jpa-one-to-one
  6. https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#architecture
  7. https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#fetching

Code samples:

  1. https://github.com/Thoughtscript/spring_boot_2023/tree/main/server/src/main/java/io/thoughtscript/bootexample/domain
  2. https://github.com/Thoughtscript/java_stuff

Spring: Tests

  1. Spring Integration Tests are foremost characterized by the injection of the WebApplicationContext:

     @Autowired
     private WebApplicationContext wac;
    • WebApplicationContext spins up a test Application Context so Services, Controllers, etc. are called as they are (not in isolation from each other).
    • Spring Mocks will also be used in Integration Tests.
  2. Spring Unit Tests don't spin up a test WebApplicationContext and rely heavily on Spring Mocks to achieve test isolation.

Example Integration Tests

import com.thoughtscript.springunit.config.AppConfig;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;

import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { AppConfig.class })
@WebAppConfiguration
public class ExampleControllerIntegrationTest {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @Before
    public void preTest() throws Exception {
        mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
    }

    @Test
    public void test() {
        try {
            // Actually calls the endpoint
            mockMvc.perform(get("/test/fetch"))
                    .andDo(print())
                    .andExpect(status().isOk())
                    .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
                    .andExpect(jsonPath("$.text").value("Hello You!"));

        } catch (Exception e) {
            System.out.println("Exception: " + e);
        }
    }

    @After
    public void postTest() {
        mockMvc = null;
    }
}

Example Unit Tests

import static org.mockito.Mockito.*;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

public class ExampleControllerUnitTest {

    private MockMvc mockMvc;

    @Mock
    private ExampleService exampleService;

    @InjectMocks
    private ExampleController exampleController;


    @Before
    public void preTest() {
        MockitoAnnotations.initMocks(this);
        mockMvc = MockMvcBuilders.standaloneSetup(exampleController).build();
    }

    @Test
    public void test() {
        try {
            // Mocks the endpoint and service
            when(exampleService.helloText()).thenReturn("Hello You!");

            mockMvc.perform(get("/test/fetch"))
                    .andDo(print())
                    .andExpect(status().isOk())
                    .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
                    .andExpect(jsonPath("$.text").value("Hello You!"));

            verify(exampleService, times(1)).helloText();
            verifyNoMoreInteractions(exampleService);

        } catch (Exception e) {
            System.out.println("Exception: " + e);
        }
    }

    @After
    public void postTest() {
        mockMvc = null;
    }
}

Spring: Jupiter Tests

<?xml version="1.0"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
         xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>io.thoughtscript.example</groupId>
    <artifactId>spring-cucumber</artifactId>
    <packaging>jar</packaging>
    <version>1.0.0</version>
    <name>spring-cucumber</name>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.2.1</version>
    </parent>

    <properties>
        <!-- Java Version Related -->
        <java.version>18</java.version>
        <maven.compiler.target>${java.version}</maven.compiler.target>
        <maven.compiler.source>${java.version}</maven.compiler.source>

        <!-- Dependency Versions -->
        <spring.boot.version>3.2.1</spring.boot.version>
        <lombok.version>1.18.30</lombok.version>
        <cucumber.version>7.15.0</cucumber.version>
        <junit-jupiter.version>5.10.1</junit-jupiter.version>
        <junit-platform-suite.version>1.10.1</junit-platform-suite.version>
    </properties>

    <dependencies>

        <!-- Spring Starter Dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>${spring.boot.version}</version>
        </dependency>

        <!-- Lombok Logging Dependencies -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>${lombok.version}</version>
            <scope>provided</scope>
        </dependency>

        <!-- JUnit 5 (Jupiter) Dependencies -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>${junit-jupiter.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.platform</groupId>
            <artifactId>junit-platform-suite</artifactId>
            <version>${junit-platform-suite.version}</version>
            <scope>test</scope>
        </dependency>

        <!-- Cucumber Dependencies -->
        <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-java</artifactId>
            <version>${cucumber.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-spring</artifactId>
            <version>${cucumber.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-junit-platform-engine</artifactId>
            <version>${cucumber.version}</version>
            <scope>test</scope>
        </dependency>

        <!-- Spring WebMvc Testing -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <!-- Inject Mocks Testing -->
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>

            <!-- Required for mvn spring-boot:run command -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>io.thoughtscript.example</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Cucumber Acceptance Testing

package io.thoughtscript.example.acceptance;

import io.cucumber.spring.CucumberContextConfiguration;
import org.junit.platform.suite.api.ConfigurationParameter;
import org.junit.platform.suite.api.IncludeEngines;
import org.junit.platform.suite.api.SelectClasspathResource;
import org.junit.platform.suite.api.Suite;
import org.springframework.boot.test.context.SpringBootTest;

import static io.cucumber.junit.platform.engine.Constants.GLUE_PROPERTY_NAME;

// Looks like this assumes the root dir test/resources/...
@SelectClasspathResource("features")
// This is the test package containing the actual Java Step Definition Classes
@ConfigurationParameter(key = GLUE_PROPERTY_NAME, value = "io.thoughtscript.example.acceptance")

// These are required for Cucumber to get picked up by Jupiter during the maven-surefire test phase.
@Suite
@IncludeEngines("cucumber")

// These are required by Spring
@CucumberContextConfiguration
@SpringBootTest()
public class CucumberAcceptanceTests {
}
package io.thoughtscript.example.acceptance;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import lombok.extern.slf4j.Slf4j;

import static org.junit.jupiter.api.Assertions.assertEquals;

//These all have to be public visibility
@Slf4j
public class StepDefinitions {

    private int actual;

    //These all have to be public visibility
    @Given("some number crunching")
    public void setup() {
        log.info("prepping...");
    }

    @When("I multiply {int} and {int}")
    public void multiply(Integer x, Integer y) {
        log.debug("Multiplying {} and {}", x, y);
        actual = x * y;
    }

    @When("I add {int} {int} and {int}")
    public void tripleAddition(Integer x, Integer y, Integer z) {
        log.debug("Adding {} {} and {}", x, y, z);
        actual = x + y + z;
    }

    @Then("the result is {int}")
    public void the_result_is(Integer expected) {
        log.info("Result: {} (expected {})", actual, expected);
        assertEquals(expected, actual);
    }
}
Feature: Cucumber Spring Example

  Background: A Basic Example
    Given some number crunching

  Scenario: Multiplication
    When I multiply 4 and 5
    Then the result is 20

  Scenario: Triple Addition
    When I add 1 2 and 3
    Then the result is 6

Refer to: https://github.com/Thoughtscript/spring_cucumber/tree/main/src/test/java/io/thoughtscript/example/acceptance

Controller Tests

package io.thoughtscript.example.controllers;

import io.thoughtscript.example.services.ExampleService;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(SpringExtension.class)
@AutoConfigureMockMvc
@SpringBootTest()
class ExampleRestControllerIntegrationTest {

    private final String testString = "OK";

    @Autowired
    ExampleService exampleService;

    @Test
    void testA() {
        assertEquals(testString, exampleService.example());
    }
}
package io.thoughtscript.example.controllers;

import io.thoughtscript.example.services.ExampleService;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;

import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@WebMvcTest(ExampleRestController.class)
class ExampleRestControllerWebMvcTest {

    //This has to be present and will be injected automatically into the ExampleRestController.
    @MockBean
    ExampleService exampleService;

    @Autowired
    private MockMvc mvc;

    @Test
    void testA() throws Exception {
        mvc.perform(MockMvcRequestBuilders
                        .get("/api/example")
                        .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk());
    }

}

Refer to: https://github.com/Thoughtscript/spring_cucumber/tree/main/src/test/java/io/thoughtscript/example/controllers

Basic Tests

package io.thoughtscript.example.helpers;

import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.Assertions.*;

@Slf4j
class StaticHelpersTest {

    private final String EXPECTED = "invoked";

    @BeforeAll
    // This has to be a static method
    static void init() {
        log.info("JUnit 5 Jupiter tests initializing...");
    }

    @BeforeEach
    void eachInit() {
        log.info("Running before each time...");
    }

    @Test
    // These cannot be private visibility apparently
    void testA() {
            assertEquals(EXPECTED, StaticHelpers.invoke());
    }

    @Test
    void testB() {
        assertNotNull(StaticHelpers.invoke());
    }

    @Test
    void testC() {
        assertEquals(EXPECTED.length(), StaticHelpers.invoke().length());
        assertNotEquals("incorrectString", StaticHelpers.invoke());
    }

    @AfterEach
    void eachAfter() {
        log.info("Running after each time...");
    }


    @AfterAll
    // This has to be a static method
    static void shutdown() {
        log.info("JUnit 5 Jupiter tests completed...");
    }
}

Refer to: https://github.com/Thoughtscript/spring_cucumber/tree/main/src/test/java/io/thoughtscript/example/helpers

Service Tests

package io.thoughtscript.example.services;

import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import static org.junit.jupiter.api.Assertions.*;

@Slf4j
@SpringBootTest
/*
 Apparently auto-wiring into a "basic" Jupiter test
 also requires this annotation now.
*/
public class ExampleServiceWithAutoWiringTest {
    private final String EXPECTED = "OK";

    @Autowired
    ExampleService testService;

    @Test
    void testA() {
        assertEquals(EXPECTED, testService.example());
    }
}
package io.thoughtscript.example.services;

import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.Assertions.*;

@Slf4j
class ExampleServiceWithoutAutoWiringTest {

    private final String EXPECTED = "OK";

    private ExampleService testService = new ExampleService();

    @Test
    void testA() {
        assertEquals(EXPECTED, testService.example());
    }
}

Refer to: https://github.com/Thoughtscript/spring_cucumber/tree/main/src/test/java/io/thoughtscript/example/services

  1. https://github.com/Thoughtscript/spring_cucumber/

Spring: Serverless

Note: TypeScript, JavaScript, or ECS Fargate (which deploys a complete Spring app) are recommended instead, due to performance concerns around Spring Serverless.

  1. https://www.baeldung.com/java-aws-lambda-hibernate
  2. https://www.baeldung.com/spring-cloud-function
  3. https://www.rowellbelen.com/serverless-microservices-with-spring-boot-and-spring-data/
  4. https://github.com/eugenp/tutorials/tree/master/aws-modules/aws-lambda/lambda/src/main/java/com/baeldung/lambda

Spring: WebFlux

The Reactive paradigm attempts to address shortcomings with blocking, highly-concurrent systems at scale.

Reactive programming introduces Back-Pressure: the ability of a consumer to signal how much data it is prepared to receive, so producers slow down instead of overwhelming downstream dependencies and internal resources. Without it, performance gradually degrades as throughput increases throughout a web service (the common "water pipe" metaphor - as water throughput increases, pressure builds on the whole system and it can get clogged). Prior paradigms don't handle Back-Pressure very efficiently.

Reactive Programming Principles

  1. A preference for Functional Programming.
  2. Asynchronous Programming from the beginning, not as an afterthought.
  3. Message driven - in line with other coterminous attempts to address the concerns above (Actor-Based systems like Akka, Event Stream, and Messaging systems like Kafka, etc.).
  4. Elasticity - resources are efficiently allocated based on Back-Pressure to reduce performance degradation.
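
These principles can be illustrated with the JDK's own Reactive Streams types (java.util.concurrent.Flow) - a minimal sketch, independent of WebFlux, in which the Subscriber applies Back-Pressure by request()-ing one item at a time (class and variable names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {

    public static List<Integer> run() {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // Back-Pressure: request only one item up front
                }

                @Override
                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // pull the next item only when ready
                }

                @Override
                public void onError(Throwable t) { done.countDown(); }

                @Override
                public void onComplete() { done.countDown(); }
            });

            for (int i = 1; i <= 5; i++) publisher.submit(i); // submit blocks if the buffer fills
        } // close() signals onComplete once all items are delivered

        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [1, 2, 3, 4, 5]
    }
}
```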

WebFlux Features

  1. Functional Routers - API endpoints that are implemented using Functional Programming.
  2. Streams API oriented.
  3. Mono and Flux as first-class citizens. Promise-like entities available right out of the box.
  4. Backing Reactor daemons to provide Multi-Threaded event loops.

Better performance in highly concurrent use cases.

Prohibitions

  1. One cannot use .block() within any reactive context. Use .subscribe() instead: it is non-blocking and registers an observer whose callbacks receive the emitted value(s).

MongoDB DocumentReferences

  1. Mongo DBRefs aren't supported in Reactive Spring Mongo DB. (One can combine the results of multiple Reactive Streams but that's tedious and unreadable.)
  2. So, use the standard Spring Mongo DB dependencies for managed nesting using the @DBRef annotation.

Refer to: https://docs.spring.io/spring-data/mongodb/docs/3.3.0-M2/reference/html/#mapping-usage.document-references and https://github.com/spring-projects/spring-data-mongodb/issues/3808

  1. https://www.baeldung.com/spring-webflux-concurrency
  2. https://www.baeldung.com/spring-mongodb-dbref-annotation
  3. https://docs.spring.io/spring-data/mongodb/docs/3.3.0-M2/reference/html/#mapping-usage.document-references
  4. https://github.com/spring-projects/spring-data-mongodb/issues/3808

Code samples:

  1. https://github.com/Thoughtscript/spring_2023

Spring: Threads

Spring Threads

  1. Spring Threading manages Threads for the entire web application.
  2. Spring Threads are utilized in processing the actual Business Logic.
  3. Spring Threads use TaskExecutor and TaskScheduler which are implementations of and wrappers for the underlying native Java Executor, ThreadPoolExecutor, etc. (Spring Executors can be used in lieu of the native Java Executors above.)
  4. Spring Executors also execute implementations of Runnable.
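
A minimal plain-Java sketch of the native ThreadPoolExecutor that Spring's TaskExecutor wraps, configured analogously to a core-size 2 / max-size 4 pool (class and variable names are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NativePoolDemo {

    public static int runTasks() {
        // Native equivalent of a Spring ThreadPoolTaskExecutor with
        // core-size 2 and max-size 4 (max only engages with a bounded queue).
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            executor.execute(completed::incrementAndGet); // Executors run Runnables
        }

        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks()); // 10
    }
}
```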

Enable Asynchronous task execution via:

@Configuration
@EnableAsync
@EnableScheduling
public class AppConfig {
}

Set the Spring Task Execution Pool defaults:

spring:
  task:
    execution:
      pool:
        core-size: 2
        max-size: 4

Refer to: https://docs.spring.io/spring-framework/docs/4.2.x/spring-framework-reference/html/scheduling.html

Tomcat Threads

Tomcat Threading manages inbound and outgoing web requests, connections, etc. Tomcat Threads are allocated at the "periphery" of the application - so-called Server Thread Pools that continue to be pooled/allocated even when an application itself is terminated.

Remember that many .war files might live within the same Tomcat deployment - Tomcat Threads are shared by all applications deployed within the web container.

The configuration below (application.yml) specifies the minimum and maximum number of Threads extant in the Tomcat Thread Pool:

server:
  port: 8080
  tomcat:
    max-connections: 200
    # Tomcat thread pool
    threads:
      min-spare: 2
      max: 4

Refer to: https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html

  1. https://www.baeldung.com/java-web-thread-pool-config
  2. https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html
  3. https://docs.spring.io/spring-framework/docs/4.2.x/spring-framework-reference/html/scheduling.html

Code samples:

  1. https://github.com/Thoughtscript/spring_2023/blob/main/_spring/src/main/java/io/thoughtscript/example/controllers/PasswordlessRestController.java

Spring: Asynchronous Programming

Java Spring accomplishes Asynchronous programming in three primary ways:

  1. CompletableFutures and Futures
  2. Executors, Threading, and the @Async annotation
  3. Asynchronous Messaging
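
The first approach can be sketched with plain-Java CompletableFutures, no Spring required (class and method names are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {

    public static String greet() {
        // supplyAsync runs the supplier on the common ForkJoinPool;
        // thenApply chains a non-blocking transformation onto the result.
        return CompletableFuture
                .supplyAsync(() -> "Hello")
                .thenApply(s -> s + " Async!")
                .join(); // block only at the edge to retrieve the result
    }

    public static void main(String[] args) {
        System.out.println(greet()); // Hello Async!
    }
}
```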

@EnableAsync

Enables Spring support for Asynchronous programming:

@Configuration
@EnableAsync
public class AppConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(7);
        executor.setMaxPoolSize(42);
        executor.setQueueCapacity(11);
        executor.setThreadNamePrefix("MyExecutor-");
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new MyAsyncUncaughtExceptionHandler();
    }
 }

@Async

After the above is configured, one can use the @Async annotation to automatically wrap a Bean method with an Executor:

@Async
public void asyncMethodWithVoidReturnType() {
    System.out.println("Execute method asynchronously. " + Thread.currentThread().getName());
}

@Async
public Future<String> asyncMethodWithReturnType() {
    System.out.println("Execute method asynchronously - " + Thread.currentThread().getName());
    try {
        Thread.sleep(5000);
        return new AsyncResult<String>("hello world !!!!");
    } catch (InterruptedException e) {
        //
    }

    return null;
}

The Bean method should be:

  1. Public visibility so that Spring can proxy (an interface for) the Asynchronous method. (Consequently, the method can't be invoked within the same Class since doing so would bypass the proxy.)
  2. Have a void, Future, or CompletableFuture return type.
  1. https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/annotation/EnableAsync.html
  2. https://www.baeldung.com/spring-async
  3. https://spring.io/guides/gs/async-method/

Spring: Techniques

Spring Data Mongo

Remember that when two connections are needed (e.g., one blocking and one reactive), one should use distinct Mongo configuration Classes like so:

package io.thoughtscript.example.configurations;

import com.mongodb.ConnectionString;
import io.thoughtscript.example.Constants;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;

@Configuration
@EnableMongoRepositories(basePackages = "io.thoughtscript.example.repositories")
public class MongoConfiguration {

    @Bean
    public MongoDatabaseFactory mongoDatabaseFactory() {
        return new SimpleMongoClientDatabaseFactory(new ConnectionString("mongodb://localhost:27017/" + Constants.MONGO_DB_NAME));
    }

    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate(mongoDatabaseFactory());
    }
}
package io.thoughtscript.example.configurations;

import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;
import io.thoughtscript.example.Constants;
import io.thoughtscript.example.repositories.LanguageMongoReactiveRepository;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.config.AbstractReactiveMongoConfiguration;
import org.springframework.data.mongodb.core.ReactiveMongoTemplate;
import org.springframework.data.mongodb.repository.config.EnableReactiveMongoRepositories;

@Configuration
@EnableReactiveMongoRepositories(basePackageClasses = {LanguageMongoReactiveRepository.class})
@ComponentScan(basePackages = "io.thoughtscript.example.repositories")
class ReactiveMongoConfiguration extends AbstractReactiveMongoConfiguration {

  @Value("${spring.data.mongodb.host}")
  private String host;
  @Value("${spring.data.mongodb.port}")
  private Integer port;

  @Override
  protected String getDatabaseName() {
    return Constants.MONGO_DB_NAME;
  }

  @Override
  public MongoClient reactiveMongoClient() {
    return MongoClients.create();
  }

  @Bean
  public ReactiveMongoTemplate reactiveMongoTemplate() {
    return new ReactiveMongoTemplate(reactiveMongoClient(), getDatabaseName());
  }

}
  1. Reason: two configuration Classes with overlapping Beans (say AbstractMongoClientConfiguration and AbstractReactiveMongoConfiguration, which both initialize MappingMongoConverter mappingMongoConverter) will fail to initialize correctly in the Application Context.

Refer to: https://www.baeldung.com/spring-data-mongodb-tutorial

Mapping Domain to Tables

Many frameworks have in-built conventions that dictate which Classes are mapped to what Tables. While usually helpful, one will often want to override these default conventions.

There are two common ways of doing that. The first is encountered when using a full ORM (Hibernate) to map, populate, and serialize the Columns of a Row into a POJO 1-1:

@Entity(name = "MyTable")
@Table(name = "MyTable")

In some stacks, the JPA Entity framework is used by itself and one can point SQL queries to another Table using @Entity alone (without @Table):

@Entity(name = "MyTable")
//@Table(name = "MyTable")

Refer to: https://www.baeldung.com/jpa-entity-table-names#jpqlTableName

Also: https://www.baeldung.com/hibernate-naming-strategy#1-manual-escaping-using-double-quotes

  1. https://www.baeldung.com/spring-data-mongodb-tutorial
  2. https://baeldung.com/jpa-entity-table-names#jpqlTableName
  3. https://www.baeldung.com/hibernate-naming-strategy#1-manual-escaping-using-double-quotes

Go: General Concepts

  1. Go is a Strongly Typed, Compiled programming language that supports essential Object Oriented concepts without imposing a rigid, heavyweight OOD hierarchy.

  2. The Golang Empty Interface can serve as a kind of Generic:

    // GoLang generic
    // The empty interface specifies that any type may be passed
    func golangGeneric(val interface{}) {
      fmt.Println(val)
    }
  3. Identifiers (Variables, Functions, Types) are exported from a Package when their names are capitalized.

  4. Use make() to initialize Slices, Maps, and Channels:

    nums := [][]int{{1,1},{0,0},{2,1},{100,200},{15,244}}
    a := make([]int, 5)
    
    var animals = make(map[string]string)
  5. Go has no Ternary Operator.

  6. Variables can be declared in two ways:

    • var:

      • Can be used in Global Scope (at the top of a Module/Package).

      • Optional declared Type.

        var a = 900
    • :=

      • Short Variable Declaration

      • Can only be used inside a Function (some local Scope).

        b := 100

Go: Object Oriented Design

Go uses Structs instead of Classes.

They are most akin to a kind of mutable Java Record or JavaScript Object IMO.

Unlike Classes in many other languages, they lack internal Functions ("true Methods") and Constructors.

Inheritance

Go does not have true Inheritance. Instead, it supports Composition and Embedding:

type Animal struct {
    name string
}

// Weak Inheritance by Composition
type Mammal struct {
    Animal
}

// As a field
type Reptile struct {
    animalParts Animal
}

Polymorphism

Polymorphism is primarily achieved through Interfaces: any Type implementing an Interface's Method set can be used wherever that Interface is expected. It can also be approximated with flexible Structs where an attribute plays the role of a Type.

Receiver Functions

type Animal struct {
    animalName string
    foodEaten string
    locomotionMethod string
    spokenSound string
}

func (a Animal) Eat() {
    fmt.Println(a.foodEaten + "\n")
}

func (a Animal) Move() {
    fmt.Println(a.locomotionMethod + "\n")
}

func (a Animal) Speak() {
    fmt.Println(a.spokenSound + "\n")
}

Methods (Functions that live within a Class or Instance) are implemented via the Receiver Function pattern.

Visibility

Encapsulation is obtained and Access controlled through several techniques and in-built properties of the language:

  1. Export from files and Modules using capitalized Variable Names. (Non-capitalized Variable Names will not be exported - are functionally private.)
  2. Fields on Structs can be read and altered using familiar dot-notation: p.X + p.Y + p.Z.

Examples

type Animal struct {
    animalName string
    foodEaten string
    locomotionMethod string
    spokenSound string
}

cow := Animal{"cow", "grass", "walk", "moo"}
bird := Animal{"bird", "worms", "fly", "peep"}
snake := Animal{"snake", "mice", "slither" ,"hsss"}
// Note this is a weakly OOP language
// Note the lack of inheritance and constructors

type Point struct {
    X    float32
    Y    float32
    Z    float32
}

// Pseudo constructor/factory
// Note capitalized name indicates "to export"
func NewPoint(X float32, Y float32, Z float32) *Point{

    // Partially initialize object
    p := Point{X:X, Y: Y}
    p.Z = Z

    // Return pointer
    return &p
}

// Pseudo-class methods
// Call by value
// Copy of value made

func AddCoordinates(p Point) float32 {
    return p.X + p.Y + p.Z
}

// Receiver type function
// This is also how visibility is controlled:
// By exporting receiver methods but limiting exporting of structs (by using lower-case names)

func (p Point) AdditionReceiver () float32 {
    return p.X + p.Y + p.Z
}

Code samples:

  1. https://github.com/Thoughtscript/go_refresh/blob/master/courses/9-structsreceivers/main.go
  2. https://github.com/Thoughtscript/languages_2024/blob/main/go/ood/struct.inheritance.go

Go: Asynchronous Programming

Go uses WaitGroups, goroutines, and the defer keyword to make both Multi-Threaded and Asynchronous Programming possible.

func dine(wg *sync.WaitGroup, philosophers []*Philosopher) {
    defer wg.Done()
    //...
}

wg := new(sync.WaitGroup)

wg.Add(1)
go dine(wg, philosophers)
wg.Wait()

Channels

The output of a goroutine can be sent to a Go Channel:

func sortAsync(wg *sync.WaitGroup, arr []int, c chan []int) {
    defer wg.Done()
    fmt.Printf("Go begin: %v \n", arr)
    for i := 0; i < len(arr) - 1; {
        if arr[i] > arr[i+1] {
            orig := arr[i]
            arr[i] = arr[i+1]
            arr[i+1] = orig
            i = 0
        } else {
            i++
        }
    }
    c <- arr
    fmt.Printf("Go end: %v \n", arr)
}
// Create necessary resources
wg := new(sync.WaitGroup)
c := make(chan []int)
arrs := make([][]int, 4)

//...

for i := 0; i < len(arrs); i++ {
    wg.Add(1)
    fmt.Printf("Sorting: %v \n", arrs[i])
    go sortAsync(wg, arrs[i], c)
}

sOne := <- c

Code samples:

  1. https://github.com/Thoughtscript/go_refresh/blob/master/courses/12-gosort/main.go
  2. https://github.com/Thoughtscript/go_refresh/blob/master/courses/13-philosophers/main.go

Go: Pointers

Like C++, Go gives explicit control over the use of Pointers.

Pointers are useful for controlling scenarios where we'd otherwise encounter surprises with Deep/Shallow Copying, Pass by Value, and Pass by Reference.

Address Of Operator and Dereferencing

var num int = 100
var numAddress *int = &num // declare a variable with pointer type '*int'
                           // and obtain the address of 'num'
var derefNum int = *numAddress  // dereference the pointer back to the int value
*numAddress = 42 // update the value stored at the address 'numAddress' points to

Examples

var num int = 100
fmt.Println("pointers > <variable> 'num' has <value>:", num)

var numAddress *int = &num
fmt.Println("pointers > 'numAddress' obtains <pointer> via & <address operator> and <pointer type>: *int = &num ", numAddress)

var derefNum int = *numAddress
fmt.Println("pointers > <dereferences> back to the <value> with <dereferencing operator> on 'numAddress': derefNum = *numAddress", derefNum)
fmt.Println("pointers > call & <address operator> on 'derefNum' to obtain <address>: &derefNum", &derefNum)
fmt.Println("pointers > can <dereference> back directly with & and * operators sequentially: *&derefNum", *&derefNum)

*numAddress = 42
fmt.Println("pointers > set <value> on <dereference> of 'numAddress', then get <address> from <pointer type>: *numAddress = 42", numAddress)
fmt.Println("pointers > <dereference> back to <value>: *numAddress ", *numAddress)
pointers > <variable> 'num' has <value>: 100
pointers > 'numAddress' obtains <pointer> via & <address operator> and <pointer type>: *int = &num  0xc0000a4090
pointers > <dereferences> back to the <value> with <dereferencing operator> on 'numAddress': derefNum = *numAddress 100
pointers > call & <address operator> on 'derefNum' to obtain <address>: &derefNum 0xc0000a4098
pointers > can <dereference> back directly with & and * operators sequentially: *&derefNum 100
pointers > set <value> on <dereference> of 'numAddress', then get <address> from <pointer type>: *numAddress = 42 0xc0000a4090
pointers > <dereference> back to <value>: *numAddress  42

Code samples:

  1. https://github.com/Thoughtscript/languages_2024/blob/main/go/pointers/pointers.go

Go: Generics

Examples

any Type:

func anyParamTypeOne[T any](x T) T {
    return x
}

Empty Interface:

// Everything implements this interface and so 
// can be passed successfully here
func golangGeneric(val interface{}) {
    fmt.Println(val)
}

With a shared interface that types are implementing:

func exampleNine[TT W](s TT, t TT) {
    fmt.Println("reflection > exampleNine", s, t)
}

Code samples:

  1. https://github.com/Thoughtscript/languages_2024/blob/main/go/reflection/reflection.go#L86C1-L88C2
  2. https://github.com/Thoughtscript/languages_2024/blob/main/go/generics/generics.go
  3. https://github.com/Thoughtscript/languages_2024/blob/main/go/generics/emptyinterface.go

Go: Interfaces

Examples

type S interface{}

type W interface {
    M()
}

type AA struct {
    msg string
}

type BB AA

type CC BB

func (p AA) M() {
    fmt.Println(1 + 2)
}

// Each of BB and CC requires its own implementation of M() -
// they do not automatically or implicitly inherit it (there is no super())
func (p BB) M() {
    fmt.Println(1 + 2)
}

func (p CC) M() {
    fmt.Println(1 + 2)
}
  1. Implicit - no explicit implementation keyword required (such as implements in Java).
  2. Above, AA, BB, CC implement both S and W.
  3. If BB, CC did not implement M() they'd fail to implement W despite being related to AA directly through their type definitions.
  4. M() isn't implicitly inherited (not equivalent to super() in other languages) here.
  5. S is effectively the Empty Interface since it lacks any methods to be implemented (and will even allow say var X string to implement S).
    • Therefore, use an interface definition like W for actual parameter or type constraints.

Code samples:

  1. https://github.com/Thoughtscript/languages_2024/blob/main/go/interfaces/interfaces.go
  2. https://github.com/Thoughtscript/languages_2024/blob/main/java/src/main/java/thoughtscript/io/review/InterfaceAggregationTest.java

Go: Reflection

Allows one to introspect the properties of a Variable, Struct, or Type at Run Time.

Examples

type W interface {
    M()
}

var wtype = reflect.TypeOf((*W)(nil)).Elem() // Obtain the type of interface W

func exampleEight(s AA, t any) {
    T := reflect.TypeOf(t)                    // Get the type of argument t
    if T.Implements(wtype) {                  // Ensure that T implements interface W
        fmt.Println("reflection > exampleEight", s, t)
    } else {
        panic("reflection > t doesn't implement W")
    }
}

Code samples:

  1. https://github.com/Thoughtscript/languages_2024/blob/main/go/reflection/reflection.go

Python: General Concepts

  1. Indentation matters:
    • Is enforced and will complain loudly if you don't indent correctly!
    • Has syntactic and semantic relevance:
      • def is one indentation less than code blocks.
      • The indentation can alter the meaning of a block (for instance, whether code terminates or not).
  2. Dynamically Typed
  3. Python is both Interpreted and Compiled:
    • Source is first Compiled to Bytecode (the .pyc files).
    • That Bytecode is then Interpreted by the Python Virtual Machine.
  4. Object Oriented:
    • Supports Multiple Inheritance which Java does not.
    • Supports sub-classing via parenthesized parent Classes rather than an explicit extends keyword (as in JavaScript and Java).
  5. None is the relevant Null-ish Type and Value:
    • None is Unique (a Singleton).
  6. except is the relevant catch keyword.
    • raise is the relevant throws keyword.
  7. Comparisons:
    • == compares sameness of Value
    • is compares by (strict) Object identity (by same hash of the Memory Address).
  8. self is the relevant this convention (passed by Argument only - it is de facto reserved since the convention is enforced and used through native Python).
  9. with is a convenient and concise shorthand offering the following benefits:
    • Scope (Context).
    • Implicit try-except block.
    • Limited error handling.
    • Variable declaration or alias.
  1. https://docs.python.org/3/library/functions.html#id
  2. https://www.geeksforgeeks.org/with-statement-in-python/
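To make items 7 and 9 above concrete, a minimal sketch (using io.StringIO in place of a real file so it stays self-contained):

```python
import io

# Item 7: == compares Value; `is` compares (strict) Object identity
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)  # True  - equal Values
print(a is b)  # False - two distinct Objects in memory
c = a
print(a is c)  # True  - same Object

# Item 9: `with` scopes a resource and closes it automatically
with io.StringIO() as buf:
    buf.write("hello")
    contents = buf.getvalue()
print(contents)    # hello
print(buf.closed)  # True - closed on exiting the block
```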

Code samples:

  1. https://github.com/Thoughtscript/_project_euler
  2. https://github.com/Thoughtscript/python-refresh
  3. https://github.com/Thoughtscript/python_api

Python: Data Structures

Numbers

  1. No max int - integers are arbitrary precision. This makes Python attractive for handling very big numbers.
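A quick illustration of arbitrary-precision integers:

```python
# Python ints grow without overflow
big = 2 ** 100
print(big)            # 1267650600228229401496703205376
print(big + 1 > big)  # True - no wraparound
```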
# https://projecteuler.net/problem=36

import math

if __name__ == "__main__":

    try:

        def is_palindrome(num_str):
            LEN = len(num_str)
            HALF = int(math.floor(len(num_str) / 2))

            for x in range(0, HALF, 1):
                if num_str[x] == num_str[LEN - 1 - x]:
                   continue
                else:
                    #print(num_str + " is not a palindrome")
                    return False

            #print(num_str + " is a palindrome")
            return True

        def to_binary(num):
            return format(num, 'b')

        # print(to_binary(585))

        def solve():

            result = []

            for x in range(0, 1000000, 1):
                num_str = str(x)
                A = is_palindrome(num_str)
                binary = to_binary(x)
                B = is_palindrome(binary)

                if A and B:
                    print("Double-base palindrome found: " + num_str)
                    result.append(x)

            print(result)

            sum = 0

            for x in range(0, len(result), 1):
                sum = sum + int(result[x])

            print("Sum found: " + str(sum))
            return sum

        solve()

    except Exception as ex:

        print("Exception: " + str(ex))

List

  1. Ordered sequence of items.
  2. Does not need to be singly typed (items can be of varying types).
lst = list()
lst.append("example")

if ("test" not in lst):
    print("not in")

if ("example" in lst):
    print("is in")

Tuple

  1. Immutable
  2. Corresponds to the mathematical concept of an ordered N-Tuple.
  3. Ordered.
# 3-tuple
exampleTuple = (1,2,3)

# Access tuples
print(exampleTuple[2])

# Tuples can't be changed - they are immutable
## Will throw an error if uncommented

### exampleTuple[1] = 44

# Destructuring
(x, y, z) = (9,10,11)
print(x)
print(y)
print(z)
print((x, y, z))

# Comparison - element by element
print(exampleTuple < (x, y, z))
print(exampleTuple > (1000, "", "Hello"))

String

line = "abcdefghijklmnop..."

# indexOf
startIndex = line.find("0.")

# Length of String
endIndex = len(line)

# Slice
line[startIndex:endIndex]

Sets

  1. Share curly brace reserved symbols {,} with Dictionaries but are element-valued only.
  2. Do not guarantee nor preserve order.
  3. Deduplicated.
  4. Corresponds to the mathematical concept of a Set.
thisset = {"apple", "banana", "cherry", "apple"}

Dictionary

  1. Key-Value data structure.
  2. Share curly brace reserved symbols {,} with Sets but are Key-Value.
exampleDictionary = {
    "field": "example",
    "attribute": "another",
    "numerical": 2
}

print(exampleDictionary)

# access value of dict by key
numericalOne = exampleDictionary.get("numerical")
print(numericalOne)

numericalTwo = exampleDictionary["numerical"]
print(numericalTwo)

# set value of dict
exampleDictionary["numerical"] = 4
print(exampleDictionary["numerical"])

# iterating through the dict
## keys
for x in exampleDictionary:
    print(x)

## values
for x in exampleDictionary:
    print(exampleDictionary[x])

for x in exampleDictionary.values():
    print(x)

## keys and values by destructuring
for x, y in exampleDictionary.items():
    print(x, y)

Boolean

  1. Capitalized.
False
True

Code samples:

  1. https://github.com/Thoughtscript/_project_euler/tree/main/_finished
  2. https://github.com/Thoughtscript/python-refresh/tree/master/courses

Python: Comprehensions

Comprehensions provide syntactic sugar for initializing iterables. They use the following succinct Generator Expression syntax:

  1. Element Comprehensions - x for x in range(0, 9)
  2. Key-Value Comprehensions - x : x + 1 for x in range(0, 9)
    • Left of the : specifies the Key and the right determines the resolved Value.

Generator Expressions

Very succinct in terms of written code and memory use. (Consider the extremely verbose initialization of Java nested Arrays with non-default values or static block initializations!)

print([y+y for y in range(0, 10)])
# [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

Consult: https://peps.python.org/pep-0289/

Tuple

Tuple Comprehension isn't supported since the parenthesized form already denotes a Generator Expression; use tuple(x for x in ...) to build a Tuple instead.

Review: https://stackoverflow.com/questions/16940293/why-is-there-no-tuple-comprehension-in-python

List

example_list = [x for x in range(0, 9)]
print(example_list) # [0, 1, 2, 3, 4, 5, 6, 7, 8]

Set

example_set = {x for x in range(0, 9)}
print(example_set) # {0, 1, 2, 3, 4, 5, 6, 7, 8}

Dict

example_dict = { x : x + 1 for x in range(0, 9) }
print(example_dict) # {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9}
  1. https://stackoverflow.com/questions/16940293/why-is-there-no-tuple-comprehension-in-python
  2. https://peps.python.org/pep-0289/

Python: Object Oriented Design

Classes, Inheritance, and Multiple Inheritance

class Dog:
    breed = "greyhound"
    name = "fido"

class Cat:

    # constructor
    def __init__(self, name, breed):
        self.name = name
        self.breed = breed

    # instance method
    def meow(self, meow):
        ## string interpolation
        return "{} says meow {}".format(self.name, meow)

# executable code
cat = Cat("Sonata", "black cat")
print(cat.meow("meow")) # Sonata says meow meow

class RoboCat(Cat):

    # constructor with super
    def __init__(self, name, breed, metal):
        super().__init__(name, breed)
        self.metal = metal

# Multiple Inheritance
class CatDog(Cat, Dog):

    # constructor with super
    def __init__(self, name, breed):
        super().__init__(name, breed)

# executable code
catdog = CatDog("CatDog", "cat dog")
print(catdog.meow("meow")) # CatDog says meow meow

Code samples:

  1. https://github.com/Thoughtscript/python-refresh/blob/master/examples/4%20-%20dependency/dependency.py

Python: Modules and Packages

Python scripts and files can be organized or grouped into Modules and Packages.

By default, Modules can be imported (using the import keyword) using a Namespace corresponding to the named directory structure.

A Package can also define a customized Namespace and is typified by the presence of __init__.py files at the root level of the Package.

Importing and Dependency Injection

Given:

+- /example
|   +- /dependencies
|       +- a.py
+- dependency.py
+- main.py

/dependencies/a.py:

class B:
    num = 0

dependency.py:

class Dog:
    breed = "greyhound"
    name = "fido"

# multiple classes in one script
class Cat:

    # constructor
    def __init__(self, name, breed):
        self.name = name
        self.breed = breed

    # instance method
    def meow(self, meow):
        ## string interpolation
        return "{} says meow {}".format(self.name, meow)

# executable code
cat = Cat("Sonata", "black cat")
print(cat.meow("meow"))

class RoboCat(Cat):

    # constructor with super
    def __init__(self, name, breed, metal):
        super().__init__(name, breed)
        self.metal = metal

main.py:

import dependency
import dependencies.a as A

if __name__ == '__main__':

    try:
        # dependency injection example
        robo = dependency.RoboCat("cat one", "Egyptian", "chrome")
        print(robo.meow("10101010011"))

        # a second dependency injection example
        B = A.B
        print("A.B " + str(B.num))

    except Exception as ex:

        print('Exception! ' + str(ex))

Review: https://github.com/Thoughtscript/python-refresh/tree/master/examples/4%20-%20dependency

Monkey Patching

Replacing a predefined Function or Method with another one (dynamically):

# Given the above imported classes...

def monkey_patch_function(self):
    print("I'm a monkey now!")

dependency.Cat.meow = monkey_patch_function

monkeycat = dependency.Cat("Monkey Cat", "monkey cat")
monkeycat.meow() # I'm a monkey now!
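A fully self-contained variant (with a hypothetical Cat class, not from the imported modules) behaves the same way:

```python
class Cat:
    def meow(self):
        return "meow"

def monkey_patch_function(self):
    return "I'm a monkey now!"

# Replace the method on the class at runtime
Cat.meow = monkey_patch_function

cat = Cat()
print(cat.meow())  # I'm a monkey now!
```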

Packages

Consult: https://github.com/Thoughtscript/python_api/blob/main/server/init/__init__.py

  1. https://stackoverflow.com/questions/448271/what-is-init-py-for

Code samples:

  1. https://github.com/Thoughtscript/python-refresh/tree/master/examples/4%20-%20dependency
  2. https://github.com/Thoughtscript/python_api/blob/main/server/init/__init__.py

Python: Error Handling

None

None is the relevant nil, null, and undefined keyword, concept, and Object.

None is a Singleton - it is uniquely created and one copy exists.

Use the is keyword to check for None.

if x is None:
    # ...

Errors and Exceptions

In Java, an Error specifies an application-terminating event whereas an Exception is something that deviates from some expectation and ought to be handled in most circumstances.

Python Errors refer to syntax-specific concerns. Whereas an Exception is anything thrown (or "raised" via the raise keyword).

try:
    raise NameError('HiThere')

except NameError:
    print('An exception flew by!')

    # rethrow
    raise

https://docs.python.org/3/tutorial/errors.html#raising-exceptions

Exception Handling

Three primary keywords (try, except, and finally) are used to handle a raised Exception:

if __name__ == '__main__':
    try:
        # ...

    except Exception as ex:
        # Executes when error is thrown
        # ...

    else:
        # Executes only when no error is thrown
        # (can be supplied even with a subsequent finally clause)

    finally:
        # Always executes regardless of thrown error
        # ...
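A runnable sketch of all four clauses (safe_div is an illustrative name, not from the source):

```python
def safe_div(a, b):
    try:
        result = a / b
    except ZeroDivisionError as ex:
        # Executes only when the exception is raised
        result = None
    else:
        # Executes only when no exception was raised
        print("divided cleanly")
    finally:
        # Always executes, raised exception or not
        print("done")
    return result

print(safe_div(6, 2))  # 3.0
print(safe_div(1, 0))  # None
```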

Exception Groupings

Python supports many Exceptions being raised simultaneously without the need for more complex abstraction:

def f():
    # Define a List of Exceptions to pass
    excs = [OSError('error 1'), SystemError('error 2')]
    # raise and throw them all with an ExceptionGroup
    raise ExceptionGroup('there were problems', excs)

# ...

try:
    f()

# Use top-level Exception to catch them in an except clause
except Exception as e:
    print(f'caught {type(e)}: {e}')

https://docs.python.org/3/tutorial/errors.html#raising-and-handling-multiple-unrelated-exceptions

  1. https://docs.python.org/3/tutorial/errors.html
  2. https://docs.python.org/3/tutorial/errors.html#raising-exceptions
  3. https://docs.python.org/3/tutorial/errors.html#raising-and-handling-multiple-unrelated-exceptions

Code samples:

  1. https://github.com/Thoughtscript/_project_euler
  2. https://github.com/Thoughtscript/python-refresh

Python: Truth Values

Falsy Values

None

False

# Any zero number:
0, 0L, 0.0, 0j  # (0L is Python 2 only)

# Any empty sequence:
'', (), []

# Any empty mapping:
{}

Truthy Values

All values that aren't Falsy are considered True.
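A sketch verifying the table above with bool() (omitting the Python 2-only 0L):

```python
# Every Falsy value converts to False
falsy = [None, False, 0, 0.0, 0j, '', (), [], {}]
print(all(bool(v) is False for v in falsy))  # True

# Anything else is Truthy
print(bool("non-empty"), bool([0]), bool(-1))  # True True True
```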

  1. https://docs.python.org/2.4/lib/truth.html

Python: Pass By Assignment

Pass By Assignment

Other monikers: Call by Object Reference or Call by Assignment.

Python uses a mix of approaches:

  1. Pass By Value - passes the Value and not the Object Reference as exhibited below:

     def call_by_value(val):
         val *= 2
         print("within call_by_value: " + str(val))
    
     num = 1
     print("before: " + str(num))
     call_by_value(num)
     print("after: " + str(num))
     before: 1
     within call_by_value: 2
     after: 1

    num is not modified in the outer scope.

  2. Pass By Reference - when the entire Object is passed per the following:

     def call_by_ref(exm_mp):
         exm_mp["a"] = "b"
         print("within call_by_ref: " + str(exm_mp))
    
     exm_mp = {}
     print("before: " + str(exm_mp))
     call_by_ref(exm_mp)
     print("after: " + str(exm_mp))
     before: {}
     within call_by_ref: {'a': 'b'}
     after: {'a': 'b'}

    exm_mp is modified in the outer scope and retains changes made within the Function.

  3. Pass By Assignment - a species of Pass By Reference that's specific to Python.

    Values can be associated with multiple References (similar to the way Java's String Pool allows the same String Value to be used for multiple String Variables).

     from sys import getrefcount
    
     print("--- Before  assignment ---")
     print(f"References to 1: {getrefcount(1)}")
     print(f"References to 2: {getrefcount(2)}")
     x = 1
     print("--- After   assignment ---")
     print(f"References to 1: {getrefcount(1)}")
     print(f"References to 2: {getrefcount(2)}")
     x = 2
     print("--- After reassignment ---")
     print(f"References to 1: {getrefcount(1)}")
     print(f"References to 2: {getrefcount(2)}")
     print("--- Addresses in Memory ---")
     print(id(x))
     print(id(1))
     print(id(2))
     print("--- Comparisons ---")
     print(id(x) == id(1))
     print(id(x) == id(2))
     print(x is 1)
     print(x is 2)
     print(x == 1)
     print(x == 2)

    Notice that the count changes based on how x is set. Notice also the high number of References to 1 and 2 even before x References either - the interpreter itself holds many References to small integers.

     main.py:21: SyntaxWarning: "is" with a literal. Did you mean "=="?
       print(x is 1)
     main.py:22: SyntaxWarning: "is" with a literal. Did you mean "=="?
       print(x is 2)
     --- Before  assignment ---
     References to 1: 129
     References to 2: 111
     --- After   assignment ---
     References to 1: 130
     References to 2: 111
     --- After reassignment ---
     References to 1: 129
     References to 2: 112
     --- Addresses in Memory ---
     140174349960512
     140174349960480
     140174349960512
     --- Comparisons ---
     False
     True
     False # See Warning above
     True # See Warning above
     False
     True

Comparing Objects

Can use id() to find the unique identifier for an Object (the Address of the Object in memory):

num = 1
print(id(num)) # 132506274753480

(Object Reference is also used within is comparison checks.)

Consult: https://www.w3schools.com/python/ref_func_id.asp

Documentation: https://docs.python.org/3/library/functions.html#id

  1. https://realpython.com/python-pass-by-reference/#passing-arguments-in-python
  2. https://www.geeksforgeeks.org/is-python-call-by-reference-or-call-by-value/
  3. https://www.w3schools.com/python/ref_func_id.asp
  4. https://docs.python.org/3/library/functions.html#id

Python: Asynchronous Libraries

asyncio

Uses the async and await keywords and syntax familiar from JavaScript:

import asyncio

async def count(run_var):
    print(run_var + "One")
    await asyncio.sleep(1)
    print(run_var + "Two")

# Must run and call async functions with await
async def main_method():
    await asyncio.gather(count("a"), count("b"), count("c"))

if __name__ == "__main__":
    import time
    s = time.perf_counter()
    # Use asyncio runner
    # https://docs.python.org/3/library/asyncio-runner.html
    asyncio.run(main_method())
    elapsed = time.perf_counter() - s
    print(f"{__file__} executed in {elapsed:0.2f} seconds.")
aOne
bOne
cOne
aTwo
bTwo
cTwo
  1. https://docs.python.org/3/library/asyncio.html
  2. https://docs.python.org/3/library/asyncio-runner.html

Python: Concurrency and Threading

AttributeError

Be forewarned that multiprocessing.Pool requires a target function defined at module top-level (so it can be pickled and imported by worker processes) - something not mentioned in the official documentation.

  1. https://stackoverflow.com/questions/41385708/multiprocessing-example-giving-attributeerror
  2. https://bugs.python.org/issue25053
  3. https://github.com/python/cpython/issues/69240
  1. https://docs.python.org/3/library/threading.html
  2. https://docs.python.org/3/library/multiprocessing.html
  3. https://stackoverflow.com/questions/41385708/multiprocessing-example-giving-attributeerror
  4. https://bugs.python.org/issue25053
  5. https://github.com/python/cpython/issues/69240
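As a minimal threading sketch (not taken from the linked samples), a Lock guarding a shared counter across several Threads:

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(10_000):
        with lock:  # guard the shared counter against interleaved updates
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```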

Code samples:

  1. https://github.com/Thoughtscript/python_processing_2024

Python: Techniques

Commenting

'''
I'm a multiline docstring!

I can be accessed via introspection: __doc__
'''

# I'm a single line comment

Review: https://docs.python.org/3/glossary.html#term-docstring

Float and Floor Division

'''
Float Division 
- Results in a float 
'''
5 / 2 # 2.5

'''
Floor Division 
- Results in an integer rounded down
'''
5 // 2 # 2

// behaves like Integer division in Java (for non-negative operands) or Math.floor(a / b) in JavaScript.

Consult also: https://docs.python.org/3/library/functions.html#divmod
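For instance, divmod() returns the quotient and remainder in one call:

```python
# divmod(a, b) returns (a // b, a % b)
q, r = divmod(5, 2)
print(q, r)  # 2 1
```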

args and kwargs

Most similar to the Spread Operator (...) in JavaScript.

Provides Function Parameter and Argument flexibility slightly akin to Java's Generics and Go's Empty Interface (in that supplied Arguments can vary beyond what is singularly or precisely stipulated).

# Any number can be passed - variable length
# Can also just pass a List
def my_func_a(*args_example):
    for arg in args_example:
        print(arg)

# Use
my_func_a('a', 'b', 'c')

# Use in tandem with other required arguments
def my_func_b(arg, *args_example):
    for argv in args_example:
        print(argv)

# Use
my_func_b('arg', 'a', 'b', 'c', 'd')

# Use kwargs to pass in a varying number of arguments
# These are key-value pairs
def my_func_c(**kwargs):
    for key, value in kwargs.items():
        print("Key: {0} Value: {1}".format(key, value))

# Use
my_func_c(a='a', b='b', c='c', d='d')

Consider: https://nodejs.org/api/process.html#processargv process.argv in Node.

Pickling and Unpickling

Akin to Java's Object Serialization (Serializable) - not its transient keyword, which instead marks fields to be skipped during serialization.

The Pickle Module serializes an Object or Value into a byte stream that can be written to a File.

Unpickling involves deserializing that byte stream back into an in-memory Python Object or Value.
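A minimal round-trip sketch with the standard pickle module:

```python
import pickle

data = {"name": "example", "values": [1, 2, 3]}

# Pickle: serialize the Object to bytes
blob = pickle.dumps(data)
print(type(blob))  # <class 'bytes'>

# Unpickle: deserialize back into an equal, but distinct, Object
restored = pickle.loads(blob)
print(restored == data)  # True
print(restored is data)  # False
```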

Pass

pass can be supplied in lieu of a code block:

def example_a():
    pass

# note the following isn't syntactically valid
def example_b():
    # nothing here

REST API

Using Flask and backing SQL database:

from init import get_app
from flask import request
from domain import scan_examples, get_example, delete_example, create_example, update_example

"""
DB endpoints.
"""

app = get_app()

@app.route("/api/db/example", methods=['POST'])
def postExample():
    name = request.form.get('name', '')
    result = create_example(name)

    return [str(result)]

@app.route("/api/db/examples", methods=['GET'])
def scanExamples():
    results = scan_examples()
    response = []

    for x in results:
        response.append(str(x))

    return response

@app.route("/api/db/example/<id>", methods=['PUT'])
def updateExample(id):
    name = request.form.get('name', '')
    result = update_example(id, name)

    return [str(result)]

@app.route("/api/db/example/<id>", methods=['GET'])
def getExample(id):
    result = get_example(id)

    return [str(result)]

@app.route("/api/db/example/<id>", methods=['DELETE'])
def deleteExample(id):
    result = delete_example(id)

    return [str(result)]
from init import db

"""
Domain.
"""

class Example(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), unique=False)

    def __init__(self, name):
        self.name = name

    # Must be valid JSON serializable string
    def __repr__(self):
        return '{ id: %s, name: %r }' %(self.id, self.name)

"""
Queries.
"""

def scan_examples():
    return Example.query.all()

def get_example(id):
    return Example.query.get(id)

def delete_example(id):
    example = Example.query.get(id)
    db.session.delete(example)
    db.session.commit()
    return example

def update_example(id, name):
    example = Example.query.get(id)
    example.name = name
    db.session.commit()

    result = Example.query.get(id)
    return result

def create_example(name):
    example = Example(name)
    db.session.add(example)
    db.session.commit()

    return example

def prepopulate():
    db.drop_all()

    db.create_all()

    example_a = Example('example_a')
    db.session.add(example_a)

    example_b = Example('example_b')
    db.session.add(example_b)

    example_c = Example('example_c')
    db.session.add(example_c)

    example_d = Example('example_d')
    db.session.add(example_d)

    db.session.commit()

Review: https://github.com/Thoughtscript/python_api/tree/main/server

Reverse Proxy

It's standard practice to put something like an nginx server out in front as a Reverse-Proxy for Web Applications deployed using Gunicorn or uvicorn:

This is akin to Tomcat serving as a Web Container for a specific Servlet or Spring app.
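A minimal nginx sketch (assuming the app listens on port 8000 via Gunicorn - the server name and port are illustrative):

```
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```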

NaN

Check if x is a Number:

import numbers

if isinstance(x, numbers.Number):
    # ...
  1. https://docs.python.org/3/glossary.html#term-docstring
  2. https://docs.python.org/3/library/functions.html#divmod
  3. https://nodejs.org/api/process.html#processargv

Code samples:

  1. https://github.com/Thoughtscript/python_api/tree/main/server

PHP: General Concepts

  1. https://web.archive.org/web/20240628022553/https://x-team.com/blog/cms-to-waf-wordpress-and-react

Code samples:

  1. https://github.com/Thoughtscript/php_2024
  2. https://github.com/Thoughtscript/x_team_wp_react

JavaScript: General Concepts

  1. JavaScript is an interpreted, often transpiled, Object Oriented programming language.
  2. Prototype-based Object Oriented Design with optional Class syntactic sugar.
  3. Dynamically Typed - JavaScript uses Type Coercion to Dynamically Type values and variables at Run Time.
  4. Technically, "JavaScript" is a loose name for a family of dialects that implement the ECMAScript standard: https://www.ecma-international.org/technical-committees/tc39/ - officially, there is no "JavaScript" but the name persists.
  5. JavaScript Primitive Data Types (Number, String, BigInt, Boolean, etc.) Pass by Value. Arrays and Objects Pass by Reference (one cause for the phenomenon of Shallow Copying).

Use Strict

Strict Mode enforces stricter/more secure execution of a Script at Run Time:

  1. Assignment to variables that weren't declared throws a ReferenceError (instead of silently creating a global).
  2. this is not Auto-boxed - in a plain Function call it remains undefined instead of being coerced to the global Object.
  3. eval() is handled more securely (declarations inside eval() don't leak into the surrounding scope).
  4. Silent errors that are typically ignored are thrown verbosely.
'use strict'

//...

Refer to: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode
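A small sketch of point 1 - in Strict Mode an assignment to an undeclared variable throws a ReferenceError instead of creating a global:

```javascript
function strictDemo() {
    'use strict'
    try {
        // Assignment to an undeclared variable throws in Strict Mode
        undeclared = 1
        return false
    } catch (ex) {
        return ex instanceof ReferenceError
    }
}

console.log(strictDemo()) // true
```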

JavaScript: Comparisons

Strict

Checks that both Operands are exactly the same Type and Value - for Objects and Arrays, that means the same Reference (Address in Memory):

a === b

Loose

Checks for the same Value after automatic conversion to a common Type (shared Type Coercion):

a == b

Object.is()

Special comparison that treats NaN as equal to itself and distinguishes +0 from -0, along with some other important edge cases.
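For example:

```javascript
console.log(NaN === NaN)          // false - NaN never equals itself with ===
console.log(Object.is(NaN, NaN)) // true

console.log(0 === -0)            // true
console.log(Object.is(0, -0))    // false - signed zeros are distinguished
```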

Refer to: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Comparison_Operators

Type Checking

There are three primary ways to Type Check:

Primitive checking:

typeof 'Hello World' === 'string';

Check whether an Object is an instance of a Class:

var x = new Dog();
x instanceof Dog;

Via constructor:

var x = 'Hello World';
x.constructor === String;
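Note the well-known typeof null quirk when Primitive checking:

```javascript
console.log(typeof null)       // "object" - a long-standing quirk; null is not an Object
console.log(typeof undefined)  // "undefined"
console.log(typeof [])         // "object" - typeof can't distinguish Arrays
console.log(Array.isArray([])) // true - preferred check for Arrays
```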

JavaScript: This Keyword

The this keyword refers to the Object bound to the current execution context - its value is determined by how a Function is called, not where it's defined.

Arrow Functions

Note that JavaScript Arrow Functions don't bind their own this - this is inherited lexically from the nearest enclosing Function scope.

const A = function() {

    this.example = "hello, world"

    const B = () => {
        // Refers to the scope of A
        console.log(this.example) 
    }

    B()
}() // "hello, world"

Scope Binding

Consider the following:

var meeple = "Meeple One"

var myObj = {
    meeple: "Meeple Two",
    prop: {
        getMeeple: function() {
            return this.meeple
        },
        meeple: "Meeple Three"
    }
}

console.log(myObj.prop.getMeeple())

var test = myObj.prop.getMeeple

console.log(test())

The output will be:

"Meeple Three"
"Meeple One"

Remember that this is bound at the call site, not where the Function is defined - myObj.prop.getMeeple() is invoked on myObj.prop (so this.meeple is "Meeple Three"), while test() is invoked as a bare Function reference, so this falls back to the top-level (global) Scope of the script file itself (where meeple is "Meeple One").

Bind, Call, Apply

Since this is relative to the scope in which it's used, it's sometimes necessary to use a JavaScript Function with a precisely specified this value.

// Bind, React example
this.setSubmitState = this.setSubmitState.bind(this)

https://gitlab.com/Thoughtscript/card_lookup_helper/-/blob/main/client/reactAppSrc/Components/Stateful/Submit/index.jsx?ref_type=heads#L19

// Call, Prototype example
function Example(msg) {
  this.msg = msg
}

function SubExample(msg, subexamplefield) {
  Example.call(this, msg)
  this.subexamplefield = subexamplefield
}

console.log(new SubExample('my msg', 'my subexample field'))

// Apply, native Math function example w/ null this
const max = Math.max.apply(null, [5, 6, 2, 3, 7])
  1. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind
  2. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply
  3. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call

Code samples:

  1. https://gitlab.com/Thoughtscript/card_lookup_helper/-/blob/main/client/reactAppSrc/Components/Stateful/Submit/index.jsx?ref_type=heads#L19

JavaScript: Falsy Values

The following values are interpreted as false:

    false  
    0 (zero)  
    "" (empty string)  
    null  
    undefined    
    NaN   

All values are truthy unless they are falsy.
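A quick way to confirm - each of these coerces to false:

```javascript
const falsy = [false, 0, "", null, undefined, NaN]

// Boolean() applies the same coercion an if-statement would
console.log(falsy.filter(Boolean).length) // 0 - all filtered out

console.log(Boolean([])) // true - an empty Array is truthy (a common surprise)
```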

JavaScript: Quirks

Some that are covered elsewhere:

  1. Shallow vs. Deep Copying
  2. Truthy and Falsy values (in general)
  3. Type Coercion (Auto-Boxing)

Empty Array Comparisons

Empty Array comparisons can result in some counter-intuitive consequences due to Type Coercion and Falsy Value resolution:

console.log([] === ![]) // false
console.log([] !== []) // true
console.log([] == ![]) // true

Default Numeric Sorting

One of JavaScript's oft-lamented quirks is that Number types are cast to Strings when sorted using default sorting:

const A = [1,2,3,4,5,6,7,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
A.sort()
console.log(A)
// [1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 3, 4, 5, 6, 7, 7, 8, 9]

However using a custom comparator that forces Numeric comparison:

//... 

A.sort((a,b) => {
    const A = parseInt(a), B = parseInt(b)
    if (A < B) return -1
    if (A > B) return 1
    return 0
})
//...

// [1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

Numeric Object Keys

Integer-like Object keys don't exhibit the above quirk. Although Object keys are technically Strings, keys that parse as array indices are enumerated in ascending numeric order:

const M = {}

M[1] = true
M[21] = true
M[2] = true
M[12] = true
M[3] = true
M[11] = true

const OK_M = Object.keys(M)

for (let i = 0; i < OK_M.length; i++) {
    console.log(`${OK_M[i]} ${M[OK_M[i]]}`)
}

/*
    "1 true"
    "2 true"
    "3 true"
    "11 true"
    "12 true"
    "21 true"
*/

JavaScript: Variables

  1. let - specifies a mutable variable declaration, scoped to the immediate enclosing Block.
  2. const - specifies a Constant binding: it can't be reassigned once declared (though the contents of an Object or Array it points to remain mutable).
  3. var - original, general, reassignable, Function-scoped (not Block-scoped), Hoisted.
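Note that const prevents reassignment of the binding, not mutation of the value it points to:

```javascript
const arr = [1, 2]
arr.push(3)        // OK - the Array's contents are mutable
console.log(arr)   // [1, 2, 3]

// arr = []        // TypeError: Assignment to constant variable
```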

Advantages of Let vs. Var

Since let is scoped to a local context, doing the following:

for (let i = 0; i < els.length; i++) {
    els[i].addEventListener('click', function() {
        console.log(i)
    })
}

will correctly print the right number i. If var is used instead, every button will print els.length (the final value of i after the loop completes).
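The same behavior can be demonstrated without the DOM - var is shared across iterations while let is fresh per iteration:

```javascript
const varFns = []
for (var i = 0; i < 3; i++) varFns.push(() => i)
console.log(varFns.map(f => f())) // [3, 3, 3] - one shared i, already past the loop bound

const letFns = []
for (let j = 0; j < 3; j++) letFns.push(() => j)
console.log(letFns.map(f => f())) // [0, 1, 2] - a new j binding per iteration
```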

JavaScript: Prototypes

JavaScript's Object Oriented design is Prototype-based: Objects inherit directly from other Objects through a Prototype chain.

Examples

var Human = function(name) {
    this.name = name;
}

Human.prototype.getName = function() {
    return this.name;
}

Note that, by convention, constructor/Prototype names are written using Pascal Notation.

Object Oriented Design

Define a Prototype:

var Dog = function(bark) {
    this.bark = bark;
};

//Call constructor
var p = new Dog("woof");
// Add a method to Dog.prototype
Dog.prototype.tail = function(){
    console.log("wags");
};

Inheritance:

function Poodle(bark, name) {
    // Call the parent constructor
    Dog.call(this, bark);
    this.name = name;
};

Or explicitly set prototypical inheritance using Object.create():

//Explicitly set prototypical inheritance
//Allows overriding parent methods
Poodle.prototype = Object.create(Dog.prototype);

Test it out:

var d = new Poodle("bark bark", "dooder");
d.tail(); //"wags"

console.log(d instanceof Dog);  // true 
console.log(d instanceof Poodle); // true

JavaScript: Classes

class A {
    // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_classes#private_fields
    #privatefieldA = 2 // # prefix indicates a private field
    #privatefieldB = -1 // # prefix indicates a private field

    // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_classes#static_properties
    static staticfield = "I'm only accessible through Class A, not an Instance of A"

    // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_classes#accessor_fields
    get privatefieldA() {
        return this.#privatefieldA // Getter can expose privatefield like usual
    }
}

class B extends A {
    publicfieldB = "I'm publicfieldB"

    constructor(constructorfield = "I'm initialized!"){
        super() // Must be present before this
        this.publicfieldA = "I'm publicfieldA"
        this.constructorfield = constructorfield
    }
}

class C extends B {
    // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_classes#constructor
    constructor(){
        super("I'm initialized from the subclass!") // Must precede this 
    }

    static staticmethod() {
        return "I'm a static method!"
    }
}

const a = new A()
console.log(`Class A: ${A}`)
console.log(`Object a: ${JSON.stringify(a)}`)
console.log(A.staticfield)
//console.log(a.#privatefieldA) // error
console.log(a.privatefieldA) // is accessible through getter
//console.log(a.#privatefieldB) // error
console.log(a.privatefieldB === undefined) // no getter

const b = new B()
console.log(`Class B: ${B}`)
console.log(`Object b: ${JSON.stringify(b)}`)
console.log(b.publicfieldA)
console.log(b.publicfieldB)
console.log(b.constructorfield)
//console.log(b.#privatefieldA) // error
console.log(b.privatefieldB === undefined) // no getter

const c = new C()
console.log(`Class C: ${C}`)
console.log(`Object c: ${JSON.stringify(c)}`)
console.log(c.constructorfield)
console.log(C.staticmethod())
//console.log(c.#privatefieldA) // error
  1. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_classes

Code samples:

  1. https://github.com/Thoughtscript/languages_2024/blob/main/node/classes/ClassExample.js

JavaScript: Promises

  1. Promises - an Object for handling Asynchronous, non-blocking, operations.
  2. Async/Await - syntactic sugar for returning and handling a Promise

Examples

  1. Must wrap all await keywords within an async Function (outside of top-level await in ES Modules).
const asyncFunc = async (x) => {
    return 2
}

const EX = async () => {
    const V = await asyncFunc(1)
    console.log(V) //2
}

EX()

Equivalently, wrapping the result in an explicit Promise:

const assignAsyncVal = () => new Promise((resolve, reject) => {
    asyncFunc(1).then(success => {
        return resolve(success)
    })
})

const V = assignAsyncVal().then(success => {
    console.log(success) //2
})

Using await with the Promise function assignAsyncVal() above:

const EX = async () => {
    const VV = await assignAsyncVal()
    console.log(VV) //2
}

EX()

Many Promises

To resolve multiple Promises, push them into an Array and use Promise.all(), Promise.allSettled(), etc.

const A = new Promise((resolve, reject) => resolve('A'))
const B = new Promise((resolve, reject) => resolve('B'))

const PROMISES = []
PROMISES.push(A)
PROMISES.push(B)

// Resolve all promises
Promise.all(PROMISES).then(success => {
    //...
})

Async Error Handling

Use .then() and .catch() for all Error and Rejection handling:

T()
    .then(success => { console.log(success) })
    .catch(ex => console.error(ex.message))

Note: don't pass a second , fail => { //... } callback to then() when using this syntax - rejections are already handled by catch().

Use try-catch for blocking operations:

try {
    await T()
    //...

} catch (ex) {
    //...
}

Note: Exceptions thrown inside setTimeout() callbacks and other Asynchronous operations won't be handled by a surrounding try-catch block by itself.

Use process.on('uncaughtException', //...) to catch all (Synchronous or Asynchronous) unhandled Process-level Exceptions:

try {
    process.on('uncaughtException', exception => { console.error(exception) })

} catch (ex) {
    //...
}

Consult: https://github.com/Thoughtscript/async_error_node for examples.

  1. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all

Code samples:

  1. https://github.com/Thoughtscript/async_error_node

JavaScript: Asynchronous Loops

Given the following helper method:

const print = str => {
  const el = document.getElementById("here");
  const txt = document.createTextNode(str);
  el.append(txt);
  const br = document.createElement("br");
  el.append(br);
  console.log(str);
};

Synchronous Loops

A few examples.

const firstExample = () => {
  let i;
  for (i = 0; i < 10; i++) {
    print("1st middle: " + i);
  }
  print("1st end: " + i);
};
firstExample();
const secondExample = () => {
  let i;
  for (i = 0; i < 10; i++) {
    print("2nd middle: " + i);
  }
  print("2nd end: " + i);
  return i;
};
let j = 0;
j = secondExample();
print("J 2nd: " + j);
const fourthExample = () => {
  let y = 0;
  for (let i = 0; i < 10; i++) {
    y++;
    print("4th middle: " + y);
  }
  print("4th end: " + y);
  return y;
};

let v = 0;
v = fourthExample();
print("V 4th: " + v);
1st middle: 0
1st middle: 1
1st middle: 2
1st middle: 3
1st middle: 4
1st middle: 5
1st middle: 6
1st middle: 7
1st middle: 8
1st middle: 9
1st end: 10

2nd middle: 0
2nd middle: 1
2nd middle: 2
2nd middle: 3
2nd middle: 4
2nd middle: 5
2nd middle: 6
2nd middle: 7
2nd middle: 8
2nd middle: 9
2nd end: 10
J 2nd: 10

4th middle: 1
4th middle: 2
4th middle: 3
4th middle: 4
4th middle: 5
4th middle: 6
4th middle: 7
4th middle: 8
4th middle: 9
4th middle: 10
4th end: 10
V 4th: 10

Asynchronous Loops

Given:

const thirdExample = () => {
  return setTimeout(() => {
    let y = 0;
    for (let i = 0; i < 10; i++) {
      y++;
    }
    print("3rd end: " + y);
    return y;
  }, 1000);
};
let z = 0;

z = thirdExample();
print("Z 3rd: " + z);

Intuitively, we think that the output should be something like:

// Incorrect assumption
3rd end: 10
Z 3rd: 10

Given:

const fifthExample = () => {
  let y = 0;
  setTimeout(() => {
    y = 10000;
  }, 4000);
  return y;
};

print("5th: " + fifthExample());

It's natural to think the output will be:

// Incorrect assumption
5th: 10000

The actual chronological output:

Z 3rd: 1

5th: 0

3rd end: 10

(thirdExample() returns immediately - setTimeout() returns a timer id, which is why Z 3rd prints 1 in a browser - and fifthExample() returns y before the timeout mutates it.) To remedy those counter-intuitive outputs, wrap the methods with a Promise.
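For instance, a Promise-wrapped version of thirdExample (the names here are illustrative):

```javascript
const thirdExamplePromise = () => new Promise(resolve => {
    setTimeout(() => {
        let y = 0
        for (let i = 0; i < 10; i++) y++
        resolve(y) // Resolve with the computed value instead of returning the timer id
    }, 100)
})

const run = async () => {
    const z = await thirdExamplePromise() // Suspends here until the timer fires
    console.log('3rd end: ' + z) // 10 - now available in the expected order
}

run()
```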

Event Stack

for (let i = 0; i < 3; i++) {
   console.log(1)
   setTimeout(() => console.log(i), 0)
   setTimeout(() => console.log(2), 0)
   setTimeout(() => console.log(3), 100)
   console.log(4)
}

In chronological order:

1
4
1
4
1
4

0
2
1
2
2
2

3
3
3

Perhaps counter-intuitively:

  1. Both console.log(1) and console.log(4) print before the rest (even the setTimeout() calls with 0 delay). This has to do with how JavaScript's Event Loop schedules work: setTimeout() callbacks are queued as tasks and only run after the current call stack has emptied, even with a 0 delay.
  2. One might think that all console.log(3) calls would print prior to each console.log(4). They all occur after the fact. setTimeout() never blocks the loop - the loop body runs to completion synchronously and the delayed callbacks fire afterward.

Button Event Listeners

  1. Use let instead of var.
  2. Always look up "fresh" info within the actual event callback rather than capturing it at initialization, since captured info will likely be (1) out-of-date and (2) asynchronously invalid.

Refer to the article on JavaScript Variables.

JavaScript: Deep and Shallow Copies, Merging

Deep Copying creates a copy of an object that shares no references in memory with the original.

Shallow Copying creates a copy of an object that still shares references in memory with the original.

Deep Merging allows the deeply nested fields of two objects to be added to, combined, or merged with one of those objects.

From the Reference Documentation

“A deep copy of an object is a copy whose properties do not share the same references (point to the same underlying values) as those of the source object from which the copy was made. As a result, when you change either the source or the copy, you can be assured you're not causing the other object to change too; that is, you won't unintentionally be causing changes to the source or copy that you don't expect. That behavior contrasts with the behavior of a shallow copy, in which changes to either the source or the copy may also cause the other object to change too (because the two objects share the same references).”

Shallow Copying

Consider the following tensor of rank 1, a single dimension array:

const X = [1,2,3,4,5,6,7,8]
const Y = [...X]
X[0] = 1000
console.log(X) // [ 1000, 2, 3, 4, 5, 6, 7, 8 ]
console.log(Y) // [ 1, 2, 3, 4, 5, 6, 7, 8 ]

When the array being duplicated is of rank > 1, the spread copy is only one level deep:

const A = [[1,2,4,5,6,7,8],[1,2,4,5,6,7,8]]
const B = [...A]
A[0][0] = 1000
console.log(A) // [ [ 1000, 2, 4, 5, 6, 7, 8 ], [ 1, 2, 4, 5, 6, 7, 8 ] ]
console.log(B) // [ [ 1000, 2, 4, 5, 6, 7, 8 ], [ 1, 2, 4, 5, 6, 7, 8 ] ]

Deep Copying

The recommended way now is to use JSON round-tripping (note that this drops Functions and undefined and converts Dates to Strings - the newer structuredClone() global avoids some of those limitations):

JSON.parse(JSON.stringify(ingredients_list)) // Object

Copying element-by-element also works:

const A = [[1,2,4,5,6,7,8],[1,2,4,5,6,7,8]] // Tensor of rank 2, matrix, m x n array where n > 1

// By element
const R = []
for (let r = 0; r < A.length; r++) {
    const row = []
    for (let c = 0; c < A[r].length; c++) {
            row.push(A[r][c])
    }
    R.push(row)
}

R[0][0] = 1000
console.log(R) // [ [ 1000, 2, 4, 5, 6, 7, 8 ], [ 1, 2, 4, 5, 6, 7, 8 ] ]
console.log(A) // [ [ 1, 2, 4, 5, 6, 7, 8 ], [ 1, 2, 4, 5, 6, 7, 8 ] ]
Or by spreading each row:

const A = [[1,2,4,5,6,7,8],[1,2,4,5,6,7,8]] // Tensor of rank 2, matrix, m x n array where n > 1

// By row spread operator
const R = []

for (let r = 0; r < A.length; r++) {
    // Spreading a single-dimensional row copies its elements - effectively a deep copy when the elements are Primitives.
    R.push([...A[r]])
}

R[0][0] = 1000
console.log(R) // [ [ 1000, 2, 4, 5, 6, 7, 8 ], [ 1, 2, 4, 5, 6, 7, 8 ] ]
console.log(A) // [ [ 1, 2, 4, 5, 6, 7, 8 ], [ 1, 2, 4, 5, 6, 7, 8 ] ]

Deep Merging

Note that Object.assign() performs a shallow merge - only top-level fields are copied:

Object.assign({}, objA, objB)

A true Deep Merge requires recursing into nested Objects (or a library helper like Lodash's merge()).
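A minimal recursive sketch (deepMerge here is a hypothetical helper, not a built-in - it assumes plain nested Objects with no Arrays or cycles):

```javascript
const deepMerge = (target, source) => {
    for (const key of Object.keys(source)) {
        const value = source[key]
        const existing = target[key]
        if (value !== null && typeof value === 'object' &&
            existing !== null && typeof existing === 'object') {
            deepMerge(existing, value) // Recurse into nested Objects
        } else {
            target[key] = value // Primitives (and new keys) are assigned directly
        }
    }
    return target
}

const a = { x: 1, nested: { y: 2 } }
const b = { nested: { z: 3 } }

console.log(deepMerge(a, b)) // { x: 1, nested: { y: 2, z: 3 } }
```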

JavaScript: Node

Example main method wrapped with try-catch clauses and Process exception handlers:

'use strict'

try {
  process.on('warning', warning => { console.error(`Warning encountered: ${warning}`) })
  process.on('unhandledRejection', rej => { console.error(`Unhandled Rejection override: ${rej}`) })
  process.on('uncaughtException', exception => { console.error(`Error encountered: ${exception}`) })
  process.on('exit', msg => { console.log(`Service shutting down: ${msg}`) })

//...

} catch (ex) {
  console.error(`Exception ${ex}!`)
  process.exit()
}

Workers

const { Worker } = require('worker_threads')

const createWorker = (filename, paramsToExecute) => new Promise((resolve, reject) => {

    //Constructor
    //Will pass paramsToExecute to the method executed in filename
    //Must have workerData as attribute
    const W = new Worker(filename, {workerData: paramsToExecute })

    //Listeners in parent thread
    W.on('message', message => {
        console.log(`Worker message received: ${message}!`)
        resolve(message)
    })

    W.on('error', error => {
        console.error(`Worker error encountered: ${error}!`)
        reject(error);
    })

    W.on('exit', exitCode => {
        if (exitCode !== 0) {
            console.error(`Worker stopped with exit code ${exitCode}`)
            reject(exitCode)
        } else {
            console.log(`Worker stopped with exit code ${exitCode}`)
            resolve(exitCode)
        }
    })

    //Send message to worker script
    W.postMessage('I am initialized...')

})

//Wrap this with a Thread Pool and/or Thread count to prevent excessive resourcing
const executeServiceUsingThread = (filename, paramsToExecute) => new Promise((resolve, reject) => {
    createWorker(filename, paramsToExecute).then(success => {
        console.log(`Service completed: ${success}!`)
    }, failure => {
        console.error(`Service completed: ${failure}!`)
    })
})

module.exports = {
    executeServiceUsingThread: executeServiceUsingThread
}
The worker script itself:

"use strict"

/**
 * Note: the WebWorker importScripts() cannot be used here.
 *
 * These are essentially NodeJS compliant scripts
 */

const { workerData, parentPort } = require('worker_threads')

//console.log test

console.log('Test console log inside Worker One')
console.log(`Getting Worker One workerData: ${workerData}`)

const conversationMappings = {
    "Hello!": "Goodbye!"
}

const exampleDependencyFunction = text => conversationMappings[text]

parentPort.postMessage(`Worker one: ${workerData} - message response: ${exampleDependencyFunction(workerData)}!`)
Invoked from the main thread:

//...
WT.executeServiceUsingThread('./nodeThread/workerOne/worker_script_one.js', "Hello!")

HTTP

module.exports = {
  createHttpServer: () => {
    const S = require('http').createServer(require('./express').createServer())

    console.log('HTTP initialized!')
    console.log('REST API controller initialized!')

    S.on('clientError', (err, sck) => {
      const e = `HTTP/1.1 400 Bad Request! ${err}`
      console.error(e)
      sck.end(e)
    })

    S.listen(require('../config').SERVER.PORT, () => { console.log(`HTTP server started on port: ${S.address().port}`) })

    return S
  }
}

HTTPS

With Express:

const C = require('../config'), FS = require('node:fs')

module.exports = {
  createHttpsServer: () => {
    const OPTS = {
      key: FS.readFileSync(C.SERVER.SSL.KEY_PATH),
      cert: FS.readFileSync(C.SERVER.SSL.CERT_PATH)
    }

    const S = require('node:https').createServer(OPTS, require('./express').createServer())

    console.log('HTTPS initialized!')

    S.listen(C.SERVER.HTTPS_PORT, () => { console.log(`HTTPS server started on port: ${S.address().port}`) })

    return S
  }
}
The Express app factory:

const express = require('express'),
  C = require('../config')

module.exports = {
  createServer: () => {
    const app = express()

    app
      .use(require('morgan')('dev'))
      .use(express.json())
      .use(express.urlencoded({ extended: true }))

      .use(require('cookie-parser')())

      .use(require('cors')({
        origin: C.SERVER.CORS,
        optionsSuccessStatus: 200
      }))

      .use('/api', require('./api'))

    return app
  }
}
An example Router with auth middleware:

const express = require('express'),
  privateApi = express.Router(),
  C = require('../config')

privateApi
  .all('*', async (req, res, next) => await require('./auth').LOG_IP_FILTER(req, res, next))

  .all('*', async (req, res, next) => await require('./auth').DECRYPT(req, res, next))

  .all('*', async (req, res, next) => await require('./auth').AUTH_CHECK(req, res, next))

  // https://localhost:8888/api/projections?auth=...
  .get("/projections", async (req, res) => {
    let responseData = await require('./domain/budgetprojections').BUDGET_PROJECTIONS({})
    let stringData = require('./auth').SCRAMBLE(C.AUTH.V(), C.AUTH.BP(), JSON.stringify(responseData))
    return res.send({ status: 200, data: stringData })
  })

  //...

Exports and Imports

module.exports = {
    myExportedFunction: () => {
      //...
    }
}
Then imported elsewhere:

const R = require('../path/to/my/file.js')

R.myExportedFunction()

Code samples:

  1. https://github.com/Thoughtscript/mean_2023/tree/main/node
  2. https://github.com/Thoughtscript/kin_insurance_js
  3. https://github.com/Thoughtscript/node-2021
  4. https://github.com/Thoughtscript/node_apis_notes

JavaScript: Imports

Named Imports

export const

export const
    BASE_PATH = '/',
    HOME_PATH = '/home',
    API_PATH = '/api',
    RAILS_API_URL = 'http://localhost:3000/examples/all'
Imported via:

import { BASE_PATH, HOME_PATH, API_PATH } from '../../../Constants'

Default Imports

export default

//...

export default () =>
    <header>
        <h2>React 2024</h2>
        <Menu/>
    </header>
Imported via:

import CustomHeader from '../../Presentation/CustomHeader'

Namespace Imports

Resolve namespace conflicts, assign a custom moniker, or get all exports from a namespace, etc.

//...
import * as my_alias from 'my_library'

Side Effects and Files

Import to load, execute, or use a script, file, or asset.

import 'my_script'

import './MyCoin.css'

CommonJS and ES Modules

Node

module.exports = {
    MY_EXPORT: //..
}
const A = require('my_module').MY_EXPORT

Consult:

  1. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
  2. https://nodejs.org/api/esm.html#require
  3. https://nodejs.org/api/modules.html

Code samples:

  1. https://github.com/Thoughtscript/rrr_2024/tree/main/react/reactAppSrc
  2. https://github.com/Thoughtscript/erc20_2024/blob/main/react/reactAppSrc/Components/Stateful/MyCoinPage/index.jsx

JavaScript: Typescript

Interfaces

// Interfaces are struct-like with typed constraints
export interface Example {
    fieldA: string
    fieldB: number
    fieldC: boolean
}

// variable types are specified via:
const E: Example = {
    fieldA: "example",
    fieldB: 1,
    fieldC: true
}

// Interfaces can be combined with class and constructor syntax
export class ExampleImpl implements Example {
    fieldA = ""
    fieldB = 0
    fieldC = false

    constructor (fieldA: string, fieldB: number, fieldC: boolean) {
      this.fieldA = fieldA
      this.fieldB = fieldB
      this.fieldC = fieldC
    }
}

const EE = new ExampleImpl('a', 3, true)

https://github.com/Thoughtscript/ts_serverless_exp/blob/master/src/interfaces/example.interface.ts

Types

type A = {
    id: number
}

type B = {
    name: string
}

// Types can be combined (intersect, union, simple polymorphism, etc.)
type C = A & B

interface D {
    fieldA: string,
    fieldB: string,
    fieldC: number,
}

// Can pick specific fields (think subtype)
type E = Pick<D, "fieldA">;

const A: A = {
    id: 1
}

const C: C = {
    id: 1,
    name: "pronomen"
}

const E: E = {
    fieldA: 'message'
}

https://github.com/Thoughtscript/ts_serverless_exp/blob/master/src/types/example.type.ts

  1. https://www.typescriptlang.org/docs/

Code samples:

  1. https://github.com/Thoughtscript/ts_serverless_exp/tree/master
  2. https://github.com/Thoughtscript/typescript_classes

JavaScript: Web Workers

WorkerManager

<html>
<head>
    <meta charset='UTF-8'>
    <title>WebWorker Example</title>
</head>
    <body>
        <h3>WebWorker Request:</h3>
        <span id="request">Hello!</span>
        <h3>WebWorker Responses:</h3>
        <p id="response"></p>
    </body>
    <script type="text/javascript" src="workerManager.js"></script>
    <script>
        var wm = new WorkerManager();
        wm.startWrapperExample(document.getElementById("request").textContent);
    </script>
</html>

Using a WorkerManager wrapper with two example implementations: BlobWorker and ScriptWorker.

//workerManager.js

"use strict";

/** BEGIN WorkerManager Prototype */

var WorkerManager = function () {
};

WorkerManager.prototype.Worker = null;

WorkerManager.prototype.opts = {
    inline: false,
    scriptWorker: {
        script: "scriptWorker.js"
    },
    blobWorker: "self.addEventListener('message', function(e) {" +
    "self.postMessage('Goodbye!');" +
    "}, false);" 
};

WorkerManager.prototype.createBlobWorker = function () {
    var blob = window.URL.createObjectURL(new Blob([WorkerManager.prototype.opts.blobWorker]));
    this.Worker = new Worker(blob);
};

WorkerManager.prototype.createScriptWorker = function () {
    this.Worker = new Worker(this.opts.scriptWorker.script);
};

WorkerManager.prototype.isSupported = function () {
    return window.Worker;
};

WorkerManager.prototype.startWrapperExample = function (t) {
    if (this.isSupported()) {
        if (this.opts.inline) this.createBlobWorker();
        else this.createScriptWorker();
        if (this.Worker != null) {
            this.Worker.postMessage(t);
            this.Worker.addEventListener('message', function (e) {
                document.getElementById('response').textContent = e.data;
            }, false);
        }
    }
};

/** END WorkerManager Prototype */

ScriptWorker

//scriptWorker.js

"use strict";

importScripts('./scriptWorkerDependency.js');

self.addEventListener('message', function (e) {
    self.postMessage(exampleDependencyFunction(e.data));
}, false);

ScriptWorker Dependency

// scriptWorkerDependency.js

"use strict";

var conversationMappings = {
    "Hello!": "Goodbye!"
};

var exampleDependencyFunction = function(text) {
    return conversationMappings[text];
};

JavaScript: Techniques

Initialize Many EventListeners

To initialize an EventListener on a DOM Element:

const el = document.getElementById('todos')

for (let i = 0; i < el.children.length; i++) {
    const c = el.children[i]

    c.addEventListener('click', (e) => {
        console.log(e.target.id)
        e.target.style.display = 'none'
        e.preventDefault()
    })
}

Remember that by the time the Element is clicked:

  1. Don't rely on the captured reference c - it may be stale or disassociated from the live Element by that point.
  2. e.target is still the preferred, official, way to access the Attributes on the Element, the Element itself, etc.
  3. this will refer to the top-level scope and shouldn't be used.

In short, use e and e.target to access the propagated DOM Event and current Element, respectively. (Similar to React's ref.current.)

Fastest Way To Remove An Index

Supposedly the fastest way to remove an element from an Array by index is as follows:

const todos = []

for (let i = 0; i < todos.length; i++) {
    if (todos[i].uuid === uuid) {
        // Remove by index
        todos.splice(i, 1)
        break
    }
}

Vs. deep removal:

const test = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, "a", 'b', 'c', 'd', 'e', "f", 'g']

const deepRmv = key => {
  const temp = [...test]
  let result = []

  const BEGIN = new Date()
  for (let i = 0; i < temp.length; i++) {
    if (temp[i] === key) continue
    result.push(temp[i])
  }
  const END = new Date()

  console.log(`deepRmv time: ${END-BEGIN} result: ${result}`)

  return result
}

const rmvBySlice = key => {
  const temp = [...test]

  const BEGIN = new Date()
  for (let i = 0; i < temp.length; i++) {
    if (temp[i] === key) {
      temp.splice(i, 1)
      break
    }
  }
  const END = new Date()

  console.log(`rmvBySlice time: ${END-BEGIN} result: ${temp}`)

  return temp
}

deepRmv("d")
rmvBySlice("d")
"deepRmv time: 0 result: 1,2,3,4,5,6,7,8,9,0,a,b,c,e,f,g"
"rmvBySlice time: 0 result: 1,2,3,4,5,6,7,8,9,0,a,b,c,e,f,g"

It's probably best to use the for loop with splice() since it's a bit more succinct and avoids building a second array.

https://javascript.plainenglish.io/how-to-remove-a-specific-item-from-an-array-in-javascript-a49b108404c shows that for loop with splice is 5x faster than indexOf with splice.

Common Operations

const ARR = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

const first = ARR.shift() // shift - remove and return element from index 0
console.log(first) // 1
console.log(ARR) // [2, 3, 4, 5, 6, 7, 8, 9, 10]

ARR.unshift(first) // unshift - add element to index 0
console.log(ARR) // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

ARR.splice(4, 5) // splice(i, j) - remove j element(s) from and including i
console.log(ARR) // [1, 2, 3, 4, 10]

ARR.splice(3, 0, 11) // splice(i, 0, x) - insert x at index i without removing any elements
console.log(ARR) // [1, 2, 3, 11, 4, 10]

const sliced = ARR.slice(0, 3) // slice(i, j) - return elements between i and j (inclusive-exclusive) without altering ARR
console.log(sliced) // [1, 2, 3]
console.log(ARR) // [1, 2, 3, 11, 4, 10]
  1. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/shift
  2. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/unshift
  3. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice
  4. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice
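slice also accepts negative indices, which count back from the end of the Array:

```javascript
const LETTERS = ['a', 'b', 'c', 'd']

console.log(LETTERS.slice(-2))    // ['c', 'd']
console.log(LETTERS.slice(1, -1)) // ['b', 'c']
console.log(LETTERS)              // ['a', 'b', 'c', 'd'] - slice never mutates
```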

Benchmarking

A cool benchmarking site: https://jsbench.me/nyla6xchf4/1

BigInt

Number to BigInt conversion:

const A = 9007199254740991n
const B = BigInt(9007199254740991)
const C = 1n
const D = C / B
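Extending the snippet above: BigInt division truncates toward zero, and BigInts can't be mixed with Numbers in arithmetic:

```javascript
const A = 9007199254740991n        // BigInt literal (n suffix)
const B = BigInt(9007199254740991) // Number -> BigInt conversion

console.log(A === B)               // true
console.log(1n / B)                // 0n - BigInt division truncates toward zero
console.log(Number(A))             // 9007199254740991 - BigInt -> Number conversion

try {
  console.log(1n + 1)              // mixing BigInt and Number in arithmetic throws
} catch (e) {
  console.log(e instanceof TypeError) // true
}
```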
  1. https://javascript.plainenglish.io/how-to-remove-a-specific-item-from-an-array-in-javascript-a49b108404c
  2. https://jsbench.me/

React: Lifecycle

Can be loosely divided into three primary stages: Mounting, Updating, and Unmounting.

Re-rendering occurs as needed during the Updating stage. (Consult the detailed Component API reference: https://reactjs.org/docs/react-component.html.) A more specific breakdown:

Mounting – key methods:

  1. constructor()
  2. render()
  3. componentDidMount()

Updating – key methods, rerendering:

  1. shouldComponentUpdate()
  2. render()
  3. componentDidUpdate()
  4. setState()
  5. forceUpdate()

Unmounting – key methods:

  1. componentWillUnmount()

Code samples:

  1. https://github.com/Thoughtscript/x_team_wp_react/tree/master/xteamClient
  2. https://gitlab.com/Thoughtscript/budget_helper/-/tree/main/client

React: Virtual and Shadow DOM

The Shadow DOM is now part of JavaScript's Web APIs.

A Virtual DOM is sometimes used in parallel with or on top of the common Shadow DOM.

React used to support a React-specific Virtual DOM as an additional React feature.

Vue and Angular support using the Shadow DOM.

  1. https://developer.mozilla.org/en-US/docs/Web/API/Web_components/Using_shadow_DOM
  2. https://en.wikipedia.org/wiki/Virtual_DOM
  3. https://vuejs.org/guide/extras/rendering-mechanism
  4. https://legacy.reactjs.org/docs/faq-internals.html
  5. https://www.geeksforgeeks.org/reactjs-virtual-dom/
  6. https://dev.to/zokizuan/angular-adventure-deep-dive-into-angulars-view-encapsulation-ml
  7. https://www.npmjs.com/package/vue-shadow-dom
  8. https://vuejs.org/guide/extras/web-components

React: State, Props

  1. Props - immutable, settings, configuration, initialized values, etc. that are passed one-way (unidirectionally) from a wrapping parent element into a child element.
  2. State - the mutable, local Component state.

Props

Consider the following example. A Component CustomFooter has several child Components, namely CustomLink instances.

Values are supplied for url and label which are passed from CustomFooter into each CustomLink as Props.

// CustomFooter

import React from 'react'
import CustomLink from '../CustomLink'
import './CustomFooter.css'

export default () =>
    <footer>
        <ul>
            <li><CustomLink url={'https://www.linkedin.com/in/adamintaegerard/'} label={'LinkedIn'}/></li>
            <li><CustomLink url={'https://thoughtscript.io/landing.html'} label={'Thoughtscript.io'}/></li>
        </ul>
    </footer>

CustomLink then accesses the supplied Props through destructuring.

It then uses the supplied values for url and label to populate the href and text values for the rendered Anchor tag:

// CustomLink

import React from 'react'

export default ({url, label}) => <a href={url} rel="nofollow noopener noreferrer" target="_blank">{label}</a>

State

Consider the following example:

  1. AccountSummaries initializes its state Object within the constructor call.
  2. A token is made via makeToken() to authenticate against the endpoint BUDGET_HELPER_API_URL.
  3. An async HTTP GET Request is made against the endpoint BUDGET_HELPER_API_URL.
  4. The Response Object is parsed and checked for a valid Status Code.
  5. The resultant data is then set into State, updating the Component state Object.
  6. Setting or modifying State typically involves the Component re-rendering.
  7. Values in State are destructured const { accounts, authorized } = this.state and made available during Render.
import React from 'react'
import './AccountSummaries.css'
import { asyncGet } from '../../../Helpers/Xhr/Get'
import { BUDGET_HELPER_API_URL } from '../../../Constants'
import { makeToken, parseJson } from '../../../Helpers/Generic'

export class AccountSummaries extends React.Component {
    constructor(props) {
        super(props)
        this.state = {
            accounts: [],
            authorized: true,
            ...this.props
        }
    }

    componentDidMount() {
        try {
            asyncGet(`${BUDGET_HELPER_API_URL}accounts?auth=${makeToken()}`).then(accounts => {
                if (JSON.parse(accounts).status === 200) {
                    this.setState({
                        accounts: parseJson(accounts),
                        authorized: true
                    })
                } else {
                    this.setState({
                        authorized: false
                    })
                }
            })
        } catch (ex) {
            console.log(ex)
        }
    }

    render() {
        const { accounts, authorized } = this.state
        // ... return markup built from accounts and authorized
    }
}
  1. https://web.archive.org/web/20230128043222/https://x-team.com/blog/react-reactor-passwordless-spring/

Code samples:

  1. https://github.com/Thoughtscript/react_2021/tree/master/reactAppSrc/Components/Presentation
  2. https://github.com/Thoughtscript/x_team_wp_react/tree/master/xteamClient
  3. https://gitlab.com/Thoughtscript/budget_helper/-/tree/main/client

React: Hooks

Comparisons

  1. State :
    • Holds state in a single Component.
    • Doesn't persist data between Component Mounts.
    • Persists data between Rerenders, Renders.
    • Mutable.
    • Can be used as a Hook (useState) or via fully-qualified Class syntax (this.state).
  2. Reducer:
    • Can be reused across Components.
    • Doesn't persist data between Component Mounts.
    • Persists data between Rerenders, Renders.
    • Mutable.
    • Must be a Hook.
  3. Context:
    • Simplifies Prop-Drilling and allows top-level state to be shared with deeply nested child Components.
    • Persists and shares data between Rerenders, Renders.
    • Mutable.
    • Must be a Hook.
  4. Web Storage API (localStorage, sessionStorage) or Client-Side Database (lowdb, IndexedDB API):
    • Persists data between Rerenders, Renders, and Component Mounts / Component Unmounting.
    • Can be subscribed to using the Hook useSyncExternalStore.

State

One can now set State within a formerly Stateless Functional Component.

import React, { useState } from 'react'

export function App(props) {
  const [stateVar, setStateVar] = useState(0)

  return (
    <div>
      <button onClick={() => {
        setStateVar(stateVar + 1)
        console.log(stateVar) // 0 1 2 3 ... (logs the pre-update value)
      }}><code>stateVar</code> Counter</button>
    </div>
  )
}

Note that stateVar must be initialized via useState.

Side Effects

Side Effects run after a Component renders (hence, side effect).

import React, { useEffect, useRef } from 'react'

export function App(props) {
    const myRef = useRef(null)

    useEffect(() => {
        myRef.current.style.color = "Red"
        myRef.current.className = "Example"
        console.log("side-effect") // side-effect
        console.log(myRef.current.className) // Example
    })

    return (
        <div>
            <h1 ref={myRef}>My Header</h1>
        </div>
    )
}

Useful for making succinct post-render changes within Stateless Functional Components ("dummy components") using terse, less verbose syntax (no Class constructor initialization, componentDidMount, etc.).

Reducers

Reducers are now supported out of the box and don't have to be associated with an underlying State Provider.

React continues to use this nomenclature to draw comparisons to the prior Map-Reduce naming convention (e.g. - converging sources to a single Reducer).

export const exampleStateInitialization = {
  id: 0,
  text: 'Hi!'
}

export function exampleReducer(state, action) {
  switch (action.type) {
    case 'action_type_a': {
      return {
        ...state,
        id: action.id,
        text: action.text,
      };
    }
    case 'action_type_b': {
      return {
        ...state,
        text: action.text,
      };
    }
    case 'action_type_c': {
      return {
        ...state,
        text: 'LOREM_IPSUM',
      };
    }
    default: {
      throw Error('Unknown action type: ' + action.type);
    }
  }
}
import React, { useReducer } from 'react'
import { exampleStateInitialization, exampleReducer } from './exampleReducer'

export function App(props) {
  // Define a Dispatcher function name to call the Reducer with a specific Event.
  const [exampleReducerState, exampleReducerDispatch] = useReducer(exampleReducer, exampleStateInitialization)
  return (
    <div>
      <h2 onClick = {(e) => {
        exampleReducerDispatch({
          id: 1,
          type: "action_type_b",
          text: "event_text"
        })
     }}>My Button Header</h2>
    </div>
  )
}
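Conceptually, dispatching just applies the reducer to the previous state and swaps in the result. A framework-free sketch (reducer trimmed to two action types, mirroring the example above):

```javascript
const exampleStateInitialization = { id: 0, text: 'Hi!' }

function exampleReducer(state, action) {
  switch (action.type) {
    case 'action_type_a':
      return { ...state, id: action.id, text: action.text }
    case 'action_type_b':
      return { ...state, text: action.text }
    default:
      throw Error('Unknown action type: ' + action.type)
  }
}

// Dispatching an action produces the next state without mutating the old one
let state = exampleStateInitialization
state = exampleReducer(state, { type: 'action_type_b', text: 'event_text' })
console.log(state) // { id: 0, text: 'event_text' }
```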

Context

import { createContext } from 'react'

// Create and export
export const ExampleContext = createContext("Me, Myself, and I")
import React from 'react'
import { ExampleComponent } from './ExampleComponent'
import { ExampleContext } from './ExampleContext'

// Import and wrap elements
export const App = (props) =>
  <div>
    <ExampleContext.Provider value={ "You and only you" }>
      <ExampleComponent />
    </ExampleContext.Provider>
  </div>
import React, { useContext } from 'react'
import { ExampleContext } from './ExampleContext'

export function ExampleComponent(props) {
  // Import and use the context without passing specific Props through every intermediate Component!
  const exampleContextVal = useContext(ExampleContext)

  return (
    <div>
      <h1>{ exampleContextVal }</h1>
    </div>
  )
}

Both updating a React context value and making a React context updateable (from within a sub-Component) involves associating State with the Provider value:

import React, { useState } from 'react'
import { ExampleContext } from './ExampleContext'

export function App(props) {
  const [example, setExample] = useState("Me, Myself, and I")

  return (
    // Associate the state field here
    <ExampleContext.Provider value={example}>

      <Button onClick={() => { setExample("You and only you") }}>
        I'm Button Text
      </Button>

    </ExampleContext.Provider>
  )
}
  1. https://reactjs.org/docs/hooks-intro.html
  2. https://reactjs.org/docs/hooks-effect.html
  3. https://react.dev/learn/extracting-state-logic-into-a-reducer
  4. https://react.dev/learn/passing-data-deeply-with-context
  5. https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API

Code samples:

  1. https://github.com/Thoughtscript/react_2021
  2. https://github.com/Thoughtscript/mearn_2024
  3. https://gitlab.com/Thoughtscript/region_risk_helper

React: Refs

There are two primary ways to create refs (a reference to a DOM Element that's accessible within a React Component):

  1. React.createRef() - primarily used when initializing refs within a React.Component Class constructor:

    import React from 'react'
    
    export default class App extends React.Component {
       constructor(props) {
          super(props);
          this.header = React.createRef();
       }
    
       render() {  
          console.log(this.header) 
          // current: <h1>My Header</h1>
    
          return (
             <div className="wrapper">
                <h1 ref={this.header} onClick={() => {
                   this.header.current.className = 'active'
                   this.header.current.innerText = "My New Header"
                   this.header.current.style.color = "Red"
    
                   console.log(this.header) 
                   // current: <h1 class="active" style="color: red;">My New Header</h1>
                }}>My Header</h1>
             </div>
          )
       }
    }
  2. const myRef = useRef(null); - useful within Function Components both to access a DOM Element and to persist a mutable value across renders (without triggering rerenders).

    Use with Stateless Functional Components ("dummy components").

    import React, {useRef} from 'react';
    
    export default () => {
       const myRef = useRef(null)
    
       console.log(myRef)
       // current: <h1>My Header</h1>
    
       return (
          <div>
             <h1 ref={myRef}  onClick = {(e) => {
                myRef.current.innerHTML = "My New Header"
                myRef.current.style.color = "Red"
                console.log(myRef)
                // current: <h1 style="color: red;">My New Header</h1>
    
             }}>My Header</h1>
          </div>
       )
    }

Current

  1. Most refs will expose their attributes after the Component is rendered (e.g. - in a Side Effect or onClick).
  2. Use current to access the current Element State.
  1. https://reactjs.org/docs/react-api.html
  2. https://reactjs.org/docs/hooks-reference.html#useref

Code samples:

  1. https://gitlab.com/Thoughtscript/card_lookup_helper/-/blob/main/client/reactAppSrc/Components/Navigation/Menu/index.jsx

React: Parcel

Build-tool and configuration notes.

npm run build-parcel-prod
npm run build-parcel
cd dist
npx serve

One gotcha: parcel-bundler/parcel#7636. If you add to package.json:

  "engines": {
    "node": "=16.17.0"
  }

You'll get: @parcel/packager-js: External modules are not supported when building for browser. Remove the engines field to resolve it.

By default, NPX and parcel will serve from: http://localhost:1234/

React: Redux

Overview

Redux provides asynchronous, multi-component, horizontal State Management for complex single-page React apps.

  1. Actions - defines the available operations on state storage.
  2. Container Component - binds Actions to Props. Call the bound Actions within the component to modify the Reducer state.
  3. Reducer - state storage.

Example

Actions:

'use strict'

/**
 *  Default actions for interacting with various Redux stores.
 *
 *  @Author - Adam InTae Gerard - https://www.linkedin.com/in/adamintaegerard/
 */

export const UNSAFE_SAVE = 'UNSAFE_SAVE'
export const REMOVE = 'REMOVE'
export const GET = 'GET'
export const CLEAR = 'CLEAR'
export const SAFE_SAVE = 'SAFE_SAVE'

//Use for public info containing no sensitive information
export const unsafeSave = v => {
  return {type: UNSAFE_SAVE, v}
}

//Use for secure or private info
export const safeSave = v => {
  return {type: SAFE_SAVE, v}
}

export const remove = v => {
  return {type: REMOVE, v}
}

export const get = v => {
  return {type: GET, v}
}

export const clear = v => {
  return {type: CLEAR, v}
}

Stateful Container Component:

'use strict'

/**
 *  Page Container.
 *
 *  @Author - Adam InTae Gerard - https://www.linkedin.com/in/adamintaegerard/
 */

import { connect } from 'react-redux'
import { Page } from './Page'
import { clear, get, remove, safeSave } from '../../../Redux/Shared/DefaultActions'

const mapStateToProps = state => {
  return {
    ...state
  }
}, mapDispatchToProps = dispatch => {
  return {
    save: (key, s) => {
      dispatch(safeSave({data: s, index: key}))
    },
    remove: key => {
      dispatch(remove({index: key}))
    },
    clear: () => {
      dispatch(clear())
    },
    get: key => {
      dispatch(get({index: key}))
    }
  }
}

export const PageContainer = connect(
  mapStateToProps,
  mapDispatchToProps
)(Page)

Reducer:

'use strict'

/**
 *  Encapsulated state storage Reducer.
 *
 *  @Author - Adam InTae Gerard - https://www.linkedin.com/in/adamintaegerard/
 */

import { CLEAR, GET, REMOVE, SAFE_SAVE } from '../Shared/DefaultActions'

let encapsulatedStateObj = {}

const set = (index, data) => {
  encapsulatedStateObj[index] = data
  return Object.assign({}, encapsulatedStateObj)
}, remove = index => {
  delete encapsulatedStateObj[index]
  return Object.assign({}, encapsulatedStateObj)
}, clear = () => {
  // Snapshot the keys (Object.keys) so removal is safe during iteration
  for (const key of Object.keys(encapsulatedStateObj)) {
    remove(key)
  }
  return Object.assign({}, encapsulatedStateObj)
}

/**
 * Default supplied reducer.
 *
 * Caches into state.
 *
 * Partition by key in state.
 *
 * @param state
 * @param action
 * @returns {*}
 * @constructor
 */

export const SafeStorage = (state = encapsulatedStateObj, action) => {
  const type = action['type']
  switch (type) {
    case SAFE_SAVE:
      return set(action['v']['index'], action['v']['data'])
    case REMOVE:
      return remove(action['v']['index'])
    case GET:
      return Object.assign({}, encapsulatedStateObj[action['v']['index']])
    case CLEAR:
      return clear()
    default:
      return state
  }
}
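A framework-free sketch of exercising a reducer like SafeStorage as a pure function (constants inlined for illustration; this variant returns fresh Objects instead of mutating a shared one):

```javascript
const SAFE_SAVE = 'SAFE_SAVE', REMOVE = 'REMOVE'

// Simplified, pure variant of the SafeStorage reducer above
const safeStorage = (state = {}, action) => {
  switch (action.type) {
    case SAFE_SAVE:
      return { ...state, [action.v.index]: action.v.data }
    case REMOVE: {
      const next = { ...state }
      delete next[action.v.index]
      return next
    }
    default:
      return state
  }
}

let state = safeStorage(undefined, { type: SAFE_SAVE, v: { index: 'a', data: 1 } })
console.log(state) // { a: 1 }
state = safeStorage(state, { type: REMOVE, v: { index: 'a' } })
console.log(state) // {}
```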

Code samples:

  1. https://github.com/Thoughtscript/x_team_wp_react/tree/master/xteamClient/reactAppSrc

Angular: Services

Define an @Injectable():

import {Injectable} from '@angular/core'
import { HttpClient } from '@angular/common/http'

@Injectable()
export class EventService {
    constructor(private http: HttpClient) {  }

    getEvents() { return this.http.get('https://localhost:8888/api/events'); }
}

Use it as a dependency:

import { Component, OnInit } from "@angular/core"
import { EventService } from '../../services/event.service'

interface EventResponse {
    status: string
    data: string
}

interface Event {
    uuid: number
    name: string
    msg: string
}

@Component({
    selector: "event",
    template: `
    <main>
        <div id="flex-wrapper">
        <table>
            <thead>
                <tr>
                    <th>UUID</th>
                    <th>Name</th>
                    <th>Message</th>
                </tr>
            </thead>
            <tbody>
                <tr *ngFor="let event of events">
                    <td>{{event.uuid}}</td>
                    <td>{{event.name}}</td>
                    <td>{{event.msg}}</td>
                </tr>
            </tbody>
        </table>
        </div>
    </main>
  `,
    styles: []
})

export class EventComponent implements OnInit {
    //Make array to hold data
    public events: Event[] = [];

    //Inject the relevant service here
    constructor(private _eventService: EventService) {  }

    ngOnInit() { this.getEvents(); }

    getEvents() {
        // Casting response Objects
        this._eventService.getEvents().subscribe((res: Object) => {
            const E_R = res as EventResponse
            const ALL_EVENTS = JSON.parse(E_R.data) as Event[]
            const SLICED_EVENTS = ALL_EVENTS.slice(0, 5);
            this.events = SLICED_EVENTS;
            console.log(this.events);
        });
    }
}

Export the Module and Component:

import { NgModule } from "@angular/core"
import { CommonModule } from "@angular/common"
import { EventComponent } from "./event.component"

@NgModule({
    imports: [
        CommonModule
    ],
    declarations: [EventComponent],
    exports: [EventComponent]
})

export class EventModule { }
  1. https://web.archive.org/web/20240424062026/https://x-team.com/blog/quantum-computation-python-javascript/

Code samples:

  1. https://github.com/Thoughtscript/mearn_2024/blob/main/angular/src/app/services/event.service.ts
  2. https://github.com/Thoughtscript/mearn_2024/blob/main/angular/src/app/modules/events/event.component.ts

Angular: Routing

Provided that a Module and its Components are correctly Exported:

import { NgModule } from "@angular/core"
import { Routes, RouterModule } from "@angular/router"
import { HomeComponent } from "./modules/home/home.component"
import { EventComponent } from "./modules/events/event.component"

const routes: Routes = [
    {
        path: "",
        component: HomeComponent
    },
    {
        path: "event",
        component: EventComponent
    },
];

@NgModule({
    imports: [RouterModule.forRoot(routes)],
    exports: [RouterModule]
})

export class AppRoutingModule {}

Make sure to make Modules visible within the app itself:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import {HttpClientModule} from "@angular/common/http"

import { AppComponent } from './app.component';
import { EventComponent } from './modules/events/event.component';

import { AppRoutingModule } from "./app.routing";
import { NavModule } from "./modules/nav/nav.module";
import {EventService} from "./services/event.service";

@NgModule({
  declarations: [
    AppComponent,
    EventComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    NavModule,
    HttpClientModule
  ],
  providers: [EventService],
  bootstrap: [AppComponent]
})
export class AppModule { }
import {Component} from "@angular/core"

@Component({
    selector: "ng-root",
    template: `
        <div>
            <custom-header></custom-header>
            <router-outlet></router-outlet>
            <custom-footer></custom-footer>
        </div>
    `,
    styles: []
})

export class AppComponent {
}
  1. https://web.archive.org/web/20240424062026/https://x-team.com/blog/quantum-computation-python-javascript/

Code samples:

  1. https://github.com/Thoughtscript/mearn_2024/blob/main/angular/src/app/app.routing.ts
  2. https://github.com/Thoughtscript/mearn_2024/blob/main/angular/src/app/app.module.ts
  3. https://github.com/Thoughtscript/mearn_2024/blob/main/angular/src/app/app.component.ts

CSS: Specificity

Precedence and Priority

  1. Top to Bottom - the last, bottom-most, entries override previous entries with the same or lesser level of specificity.

    • div#divTestId will be blue given the following:

      /* orange */
      div#divTestId {
        color: orange;
      }
      
      /* Last entry - blue */
      div#divTestId {
        color: blue;
      }
  2. Specificity - more narrowly defined selectors take greater precedence over less narrowly defined ones.

    • div#divTestId is more specific than #divTestId although the two semantically refer to the same element.

    • Therefore, #divTestId will remain orange given the following:

      /* More specific - orange */
      div#divTestId {
        color: orange;
      }
      
      /* blue */
      #divTestId {
        color: blue;
      }
  3. The !important keyword - overrides the standard Precedence and Priority rules described above.

    • Elevates the Priority of a declaration such that it can only be overridden by another !important CSS value.

Example

<!-- HTML -->
<div id="divTestId" class="divTestClass">
  <p id="pTestIdOne" class="pTestClass">
    text
  </p>
  <p id="pTestIdTwo" class="pTestClass">
    text
  </p>
</div>
/* CSS */

/* blue */
#divTestId {
  color: blue;
}

/* More specific - orange */
div#divTestId {
  color: orange;
}

/* More specific - green */
#divTestId.divTestClass {
  color: green;
}

/* More specific - pink */
div#divTestId.divTestClass {
  color: pink;
}

/* More specific - red */
div#divTestId.divTestClass p {
  color: red;
}

/* More specific - purple */
div#divTestId.divTestClass > p {
  color: purple;
}

/* less specific - purple still wins */
div#divTestId p {
  color: red;
}

/* less specific - purple still wins */
p#pTestIdTwo {
  color: blue;
}

/* more specific - #pTestIdTwo becomes red (#pTestIdOne stays purple) */
div#divTestId p#pTestIdTwo {
  color: red;
}

/* most specific - #pTestIdOne becomes black (final result: black and red) */
div#divTestId > p.pTestClass#pTestIdOne {
  color: black;
}

Rendered:

text

text

CSS: Techniques

Remove default padding and margins:

html, body {
    position: absolute;
    height: 100%;
    width: 100%;
    padding: 0;
    margin: 0;
    top: 0;
    left: 0;
}

Responsive Columns

<!-- HTML -->
<div class="flex-container">
    <div class="item">
        <h5>Example</h5>
    </div>
    <div class="item">
        <h5>Example</h5>
    </div>
        <div class="item">
        <h5>Example</h5>
    </div>
    <div class="item">
        <h5>Example</h5>
    </div>
        <div class="item">
        <h5>Example</h5>
    </div>
    <div class="item">
        <h5>Example</h5>
    </div>
</div>
// SCSS
.flex-container {
  display: flex;
  display: -webkit-flex;
  flex-wrap: wrap;
  width: 100%;

  & > .item {
    text-align: center;
    justify-content: center;
    align-content: center;
    width: 30%;
    border: 2px dashed gray;
    border-radius: 15px;
    padding: 5px;
    margin: 5px;
  }
}

Rendered:

Example
Example
Example
Example
Example
Example

Responsive Centering

<div class="wrapper-example">
  <h1>Example</h1>
</div>
div.wrapper-example {
    display: flex;
    display: -webkit-flex;
    flex-wrap: wrap;
    width: 100%;
}
div.wrapper-example > h1 {
  text-align: center;
  justify-content: center;
  align-content: center;
  width: 100%;
  border: 3px solid black;
  padding: 15px;
  border-radius: 15px;
}

Rendered:

Example

Scroll

::-webkit-scrollbar {
   width: 10px;
}
::-webkit-scrollbar-track {
  background: crimson;
}
::-webkit-scrollbar-thumb {
  background: orangered;
}
::-webkit-scrollbar-thumb:hover {
  background: orangered;
}

Input Text

<input id="exampleTextInput" type="text" placeholder="placeholder_text" />
input[type="text"]#exampleTextInput {
  width: 600px;
  padding: 20px;
  font-size: 20px;
  border-radius: 25px;
  color: turquoise;
  opacity: .55;
  margin: 15px;
}

input[type="text"]#exampleTextInput:focus {
    opacity: .8;
    outline: none;
}

input[type="text"]#exampleTextInput::placeholder {
  color: gray;
}

Markers

<ol>
  <li class="example">Example</li>
  <li class="example">Example</li>
  <li class="example">Example</li>
</ol>
ol > li.example::marker {
    color: orange;
}

Rendered:

  1. Example
  2. Example
  3. Example

Disable Text Selection

.disable-select {
    -webkit-touch-callout: none; /* iOS Safari */
    -webkit-user-select: none; /* Safari */
    -khtml-user-select: none; /* Konqueror */
    -moz-user-select: none; /* Old version of Firefox */
    -ms-user-select: none; /* Internet Explorer or Edge */
    user-select: none; /* All modern browsers */
}

Text Backgrounds

<div>
  I am a <span class="text-background">block of text</span>! Hooray me!
</div>
<div>
  I am another <span class="text-only">
  block of text</span>! Hooray <span class="text">again</span>!
</div>
<div>
  I am a last <span class="text-background border">
  block of text</span>! Hooray <span class="text">again</span>!
</div>
div > span.text-background {
  color: white;
  border-radius: 3px;
  border: 0px solid transparent;
  padding: 1.2px 7px 2px 5px;
  margin: 3px 0px;
  background: #b92b27;
  background: -webkit-linear-gradient(to right, #1565c0, #b92b27);
  background: linear-gradient(to right, #1565c0, #b92b27);
  width: fit-content;
}

div > span.text-background.border {
  border-bottom: 3px solid purple;
}

div > span.text-only {
  background: #b92b27;
  background: -webkit-linear-gradient(to right, #1565c0, #b92b27);
  background: linear-gradient(to right, #1565c0, #b92b27);
  /** Refer to: https://developer.mozilla.org/en-US/docs/Web/CSS/background-clip */
  background-clip: text;
  -webkit-background-clip: text;
  color: transparent;
}

Rendered:

I am a block of text! Hooray me!

I am another block of text! Hooray again!

I am a last block of text! Hooray again!

SVG Optimization

Consult: https://raygun.com/blog/improve-page-load-speed-svg-optimization/

Conditional Selector

div.css-conditional-example:has(> div.css-conditional-example) {
  color: white;
  background-color: black;
}
<div class="css-conditional-example">
  <div class="css-conditional-example">
      Example #1
  </div>

  Example #2
</div>

<div class="css-conditional-example">
    Example #3
</div>

Rendered:

Example #1
Example #2
Example #3
  1. https://aneze.com/how-to-disable-text-selection-highlighting-in-css
  2. https://developer.mozilla.org/en-US/docs/Web/CSS/background-clip
  3. https://raygun.com/blog/improve-page-load-speed-svg-optimization/

CSS: Pseudo-Classes

Pseudo-Classes provide additional filtering, querying, and narrowing power to standard Cascading Style Sheet Selectors beyond the innate ability to query for HTML id or class Attributes.

Several Pseudo-Classes involve specifying a distinct state that an Element might find itself in due to a triggering Event. For instance:

  1. :hover - triggered when a user hovers over an Element with their mouse
  2. :focus - triggered when a user clicks into an HTML Input Field

HTML Attribute Querying

Query by HTML Attribute:

[my_attr="my_value"] {
    /* */
}

/* With wildcard - any my_attr value matching my_value */
[my_attr*="my_value"] {
    /* */
}

Refer to: https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors

Some Important Pseudo Classes

/* Triggered when a user hovers over an Anchor element */
a:hover {
    /* */
}

/* First child element of an Unordered List - probably a List element */
ul:first-child {
    /* */
}

/* Last child element of an Unordered List - probably a List element */
ul:last-child {
    /* */
}

Refer to: https://developer.mozilla.org/en-US/docs/Web/CSS/:active

CSS: Media Queries

Media Queries allow one to specify conditional Cascading Style Sheets, styles, or styling without requiring complicated JavaScript.

It's generally a good idea to place narrower Media Query conditions lower to take precedence over less narrowly defined conditions:

@media (max-width: 1850px) {
  #home {
    transform: scale(.70);
  }
}

@media (max-width: 1550px) {
  #home {
    transform: scale(.48);
  }
}

Examples:

max-height only:

@media (max-height: 850px){
    nav#toc-wrapper > ul#toc {
        height: 425px;
    }
}

max-width only:

@media (max-width: 1850px) {
  .container > h2 {
    font-size: 35px;
  }
}

@media (max-width: 550px) {
  .container > block {
    color: red;
  }
}

@media (max-width: 1050px) {
    body > div#dune { display: none; }
}

Conjoined conditions with Media Types and Media Features:

@media only screen and (orientation: landscape) and (max-width: 6500px) and (max-height: 600px) {
  .container > h2 {
    color: aqua;
  }
  .container > q {
    color: aqua;
  }
}

@media only screen and (max-width: 1000px) {
  .container > q {
    color: green;
  }
}

Refer to: https://developer.mozilla.org/en-US/docs/Web/CSS/@media#media_types for all Media Types and Media Features

  1. https://developer.mozilla.org/en-US/docs/Web/CSS/@media#media_types
  2. https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queries/Using_media_queries
  3. https://www.freecodecamp.org/news/css-media-queries-breakpoints-media-types-standard-resolutions-and-more/
  4. https://devfacts.com/media-queries-breakpoints-2022/

SQL: General Concepts

NOTE: A variety of SQL dialects are used in the examples below (MySQL, Postgres, MSSQL, and SQLite).

CRUD

  1. CREATE - a resource is persisted and saved.
  2. READ - a persisted resource is scanned, read, or returned in an unmodified way.
  3. UPDATE - a persisted resource is updated.
  4. DELETE - a persisted resource is permanently or softly removed.

Corresponding Basic Operations

CREATE

CREATE statements are typified by the INSERT (Row) or CREATE (Table) keywords.

INSERT INTO PaySchedule (date, name) VALUES ('2023-02-15', 'frank');
DROP TABLE IF EXISTS "Rules";

CREATE TABLE IF NOT EXISTS "Rules" (
    "id"    INTEGER NOT NULL UNIQUE,
    "description"    TEXT,
    "category"    TEXT,
    "rule"    TEXT,
    PRIMARY KEY("id" AUTOINCREMENT)
);

READ

READ statements are typified by the presence of the SELECT keyword.

SELECT * FROM Accounts WHERE active =  1 AND status = 'Active';

SELECT * FROM Hobbies;

SELECT x.newhome + y.newhome + z.newhome + w.newhome + q.newhome
FROM 
  (SELECT SUM(unitcost * quantity) as newhome FROM House WHERE quantity > 0 AND fha = 1) AS x,
  (SELECT SUM(unitcost * quantity) as newhome FROM UpkeepHobbies WHERE quantity > 0 AND fha = 1) AS y,
  (SELECT SUM(unitcost * quantity) as newhome FROM UpkeepClothing WHERE quantity > 0 AND fha = 1) AS z,
  (SELECT SUM(unitcost * quantity) as newhome FROM UpkeepElectronics WHERE quantity > 0 AND fha = 1) AS w,
  (SELECT SUM(unitcost * quantity) as newhome FROM UpkeepKitchen WHERE quantity > 0 AND fha = 1) AS q;

SELECT * FROM MonthlyCosts WHERE choicegroup LIKE "%ALL%";

SELECT COUNT(*) FROM (SELECT grouping FROM MonthlyCostsGroceries GROUP BY grouping);

SELECT y.total / (x.mtbf * 12) AS upkeep FROM 
  (SELECT AVG(mtbf) AS mtbf FROM Upkeep WHERE quantity > 0 AND choicegroup LIKE "%ALL%") AS x, 
  (SELECT SUM(unitcost * quantity) AS total FROM Upkeep WHERE choicegroup LIKE "%ALL%") AS y;

UPDATE

UPDATE statements are typified by the presence of the UPDATE (Row) or ALTER (Table) keywords.

UPDATE Assets SET value = 400 WHERE id = 4 AND account = 8;

UPDATE Demographics SET personal = 750, success = 1, updated = '2020-01-01' WHERE id = 100;
ALTER TABLE example ADD more_text VARCHAR(45);

DELETE

DELETE statements are typified by the presence of the DELETE keyword.

DELETE FROM PaySchedule WHERE date < '2023-02-15' OR date = '2024-02-15';

SQL: ACID

Atomicity guarantees that operations are completely contained in a single unit (Transaction). It either succeeds completely or fails (such that all operations fail – it is not partially successful).

Consistency guarantees that each Transaction moves the database state from one valid, legal, state to another valid, legal, state.

Isolation guarantees that concurrent Transactions are disjoint and separate, and that they don’t overwrite each other.

Durability guarantees that persisted data remains persisted across time and system failure/disaster.
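
The Atomicity guarantee can be sketched with Python's bundled sqlite3 module (the accounts table and its values here are hypothetical illustrations):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

# Transfer 200 from account 1 to account 2 - more than account 1 holds.
# The CHECK constraint fails mid-Transaction, so BOTH writes roll back.
try:
    with conn:  # opens a Transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 2")
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
except sqlite3.IntegrityError:
    pass

# Neither UPDATE persisted: the Transaction succeeded or failed as one unit.
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 100, 2: 50}
```

Because the failed Transaction rolled back as a single unit, the earlier successful UPDATE inside it was undone as well.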

SQL: Joins

Retrieving data that's associated by field.

OUTER - returns matched Rows plus NULL values where a match is absent, depending on the specifics of the JOIN.

INNER - only what's matched in both Tables.

LEFT, RIGHT, INNER, FULL

  1. LEFT (OUTER) JOIN: Returns all records from the left Table, the matched records from the right Table, and NULL in any Row absent from the right Table.
  2. RIGHT (OUTER) JOIN: Returns all records from the right Table, the matched records from the left Table, and NULL in any Row absent from the left Table.
  3. (INNER) JOIN: Returns records that have matching values in both Tables. Equivalent to: SELECT * FROM table_one, table_two WHERE table_one.id = table_two.id
  4. FULL (OUTER) JOIN: Returns all records when there is a match in either left or right Table, NULL in any Row absent from one of the two Tables.
-- Multi JOIN
SELECT a.name as aname, b.description as name, a.description as adesc, s.name as description, b.value 
FROM Balances AS b 
LEFT JOIN Accounts AS a ON b.account = a.id 
JOIN SummaryTypes AS s ON b.purpose = s.id 
WHERE a.active = 1 AND a.status='Active' AND b.active = 1 
ORDER BY value DESC;

-- INNER JOIN
SELECT * 
FROM example AS e
INNER JOIN employee AS em ON e.id = em.id;
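
The LEFT vs INNER distinction above can be demonstrated with Python's bundled sqlite3 module (the accounts/balances tables here are hypothetical, loosely echoing the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE balances (account INTEGER, value INTEGER);
    INSERT INTO accounts VALUES (1, 'checking'), (2, 'savings');
    INSERT INTO balances VALUES (1, 500);  -- no row for account 2
""")

# INNER JOIN: only account 1, which matches in both Tables.
inner = conn.execute(
    "SELECT a.name, b.value FROM accounts a JOIN balances b ON a.id = b.account"
).fetchall()
print(inner)  # [('checking', 500)]

# LEFT JOIN: all accounts; NULL (None in Python) where the right Table has no match.
left = conn.execute(
    "SELECT a.name, b.value FROM accounts a LEFT JOIN balances b ON a.id = b.account"
).fetchall()
print(left)  # [('checking', 500), ('savings', None)]
```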

CROSS

  1. Produces the Cartesian Product of two Tables and their Rows.
  2. Every Row in Table A is combined with every Row in Table B.

Self Joins

A JOIN between a Table and itself.

  1. https://leetcode.com/submissions/detail/816099738/
  2. https://leetcode.com/submissions/detail/816856989/

Explicit vs Implicit

Consider the following:

--- Simple JOIN
SELECT * FROM A, B WHERE A.id = B.a_id;

--- Explicit JOIN
SELECT * FROM A JOIN B ON A.id = B.a_id;
  1. The two statements are very nearly functionally identical.
  2. There are, however, some differences in the leeway the Execution Planner has when computing an optimal SQL Plan.

JOIN Order

The order of JOIN statements is typically relevant for any Explicitly Joined statement:

  1. SQL optimizers will refine most simple (Implicit) JOINS and have the most leeway in computing an optimal SQL Plan.
  2. SQL optimizers will refine most INNER JOINS through a computed Execution Plan.
  3. However, OUTER JOINS will typically preserve the order specified by the statement.

Generally speaking, keep the Tables with the fewest expected Rows leftmost.

For instance, when joining three Tables A, B, and C, place whichever Table is expected to return the fewest Rows first.

SQL: Techniques

Some common SQL techniques.

DROP TABLE Check

To create a fresh, empty Table, drop any existing Table of the same name first; alternatively, use IF NOT EXISTS to prevent accidentally overriding an existing Table:

DROP TABLE IF EXISTS "Rules";

CREATE TABLE IF NOT EXISTS "Rules" (
    "id"    INTEGER NOT NULL UNIQUE,
    "description"    TEXT,
    "category"    TEXT,
    "rule"    TEXT,
    PRIMARY KEY("id" AUTOINCREMENT)
);

SELF JOIN

Occasionally, a TABLE will represent multiple entity-kinds (say a PLANT TABLE with ROWS representing both seedlings and their parent plants):

SELECT e1.name AS Plants
FROM Plants AS e1
JOIN Plants AS e2
ON e1.parent = e2.id
AND e1.sproutdate > e2.sproutdate;

Refer to: https://leetcode.com/problems/employees-earning-more-than-their-managers/description/
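
A runnable version of the seedling/parent self-JOIN above, using Python's bundled sqlite3 module (the sample rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Plants (id INTEGER PRIMARY KEY, name TEXT,
                         parent INTEGER, sproutdate TEXT);
    INSERT INTO Plants VALUES
        (1, 'oak',     NULL, '2000-01-01'),
        (2, 'sapling', 1,    '2020-01-01'),
        (3, 'odd-one', 1,    '1990-01-01');  -- "sprouted" before its parent
""")

# Join the Table to itself: e1 plays the seedling, e2 the parent plant.
rows = conn.execute("""
    SELECT e1.name
    FROM Plants AS e1
    JOIN Plants AS e2 ON e1.parent = e2.id
    AND e1.sproutdate > e2.sproutdate
""").fetchall()
print(rows)  # [('sapling',)]
```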

GROUP BY

GROUP BY is used to condense multiple ROWS with the same field into a single result in the RECORD SET. GROUP BY can be further refined by an accompanying HAVING clause (which acts like a WHERE clause for the specified GROUP). Importantly, in many dialects of SQL, every selected Column must either appear in the GROUP BY clause or be wrapped in an Aggregate Function (ROUND, AVG, COUNT, SUM, MIN, MAX, etc.).

NOTE: HAVING conditions are deferred and apply to the results of a GROUP BY clause. A WHERE clause instead filters Rows before grouping (against the entire TABLE), so a condition that doesn't involve an Aggregate is often faster expressed as a WHERE clause.

SELECT Name FROM STUDENTS GROUP BY Name, Marks, ID HAVING Marks > 75 ORDER BY Right(Name, 3) ASC, ID ASC;

SELECT e.id, ROUND(AVG(e.account), 2) AS averageAccountBalance
FROM Example AS e
JOIN Examples AS ee
ON e.id = ee.id
GROUP BY e.id, ee.name
HAVING e.account > 750
ORDER BY e.id ASC, averageAccountBalance DESC

NOTE: a double GROUP BY (e.g. - GROUP BY x.id, x.name) can be used to deduplicate associated multi-row RECORD SETS (in say a complex multi-JOIN).

Refer to: https://www.hackerrank.com/challenges/more-than-75-marks/problem
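
A minimal GROUP BY + HAVING sketch using Python's bundled sqlite3 module (the Students rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Students (id INTEGER, name TEXT, marks INTEGER);
    INSERT INTO Students VALUES
        (1, 'amy', 80), (2, 'amy', 90),   -- two rows condensed into one group
        (3, 'bob', 60), (4, 'cat', 95);
""")

# HAVING filters the groups AFTER aggregation; a WHERE clause here
# would instead filter the individual rows before grouping.
rows = conn.execute("""
    SELECT name, AVG(marks)
    FROM Students
    GROUP BY name
    HAVING AVG(marks) > 75
    ORDER BY name ASC
""").fetchall()
print(rows)  # [('amy', 85.0), ('cat', 95.0)]
```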

Chained SELECT Statements

SELECT DISTINCT CITY FROM STATION 
    WHERE (CITY LIKE 'A%'
    OR  CITY LIKE 'E%'
    OR  CITY LIKE 'I%'
    OR  CITY LIKE 'O%'
    OR  CITY LIKE 'U%')
    AND CITY IN (

SELECT DISTINCT CITY FROM STATION 
    WHERE CITY LIKE '%a'
    OR  CITY LIKE '%e'
    OR  CITY LIKE '%i'
    OR  CITY LIKE '%o'
    OR  CITY LIKE '%u')

Refer to: https://www.hackerrank.com/challenges/more-than-75-marks/problem

WITH

WITH x AS (
  SELECT months * salary AS earnings, id
  FROM employee
)
SELECT TOP 1 earnings, COUNT(id)
FROM x
GROUP BY earnings
ORDER BY earnings DESC;
  1. https://leetcode.com/problems/employees-earning-more-than-their-managers/description/
  2. https://www.hackerrank.com/challenges/more-than-75-marks/problem
  3. https://www.geeksforgeeks.org/difference-between-where-and-group-by/
  4. https://stackoverflow.com/questions/49758446/where-vs-having-performance-with-group-by
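
The WITH (Common Table Expression) pattern above can be run in SQLite via Python's sqlite3 module; note SQLite uses LIMIT 1 where MSSQL uses TOP 1 (the employee rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER, months INTEGER, salary INTEGER);
    INSERT INTO employee VALUES (1, 12, 10), (2, 6, 20), (3, 10, 11);
""")

# The CTE x computes earnings per employee; the outer query then finds
# the highest earnings figure and how many employees earned it.
row = conn.execute("""
    WITH x AS (
        SELECT months * salary AS earnings, id
        FROM employee
    )
    SELECT earnings, COUNT(id)
    FROM x
    GROUP BY earnings
    ORDER BY earnings DESC
    LIMIT 1
""").fetchone()
print(row)  # (120, 2)
```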

SQL: NULL

COALESCE

  1. SUM returns NULL not 0 if no values exist that meet the query conditions.
  2. Use COALESCE(SUM(table.column),0)
  3. Also try a different JOIN if a zero summed value fails to appear.
  4. For example, a LEFT JOIN with COALESCE(SUM(table.column),0)

NULL

  1. Use IS NULL / IS NOT NULL rather than = NULL / <> NULL (comparisons with NULL evaluate to NULL, never true).
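
Both behaviors can be verified with Python's bundled sqlite3 module (a hypothetical single-column table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (val INTEGER);
    INSERT INTO t VALUES (1), (NULL);
""")

# No rows match val > 100, so SUM yields NULL (None in Python), not 0.
no_rows = conn.execute("SELECT SUM(val) FROM t WHERE val > 100").fetchone()
coalesced = conn.execute(
    "SELECT COALESCE(SUM(val), 0) FROM t WHERE val > 100"
).fetchone()
print(no_rows, coalesced)  # (None,) (0,)

# Comparisons with NULL evaluate to NULL (never true): use IS NULL instead.
eq_null = conn.execute("SELECT COUNT(*) FROM t WHERE val = NULL").fetchone()
is_null = conn.execute("SELECT COUNT(*) FROM t WHERE val IS NULL").fetchone()
print(eq_null, is_null)  # (0,) (1,)
```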

Postgres: General Concepts

Advantages

  1. Plugin support.
  2. Full support for NoSQL-like JSON in queries, rows, and columns.
  3. Less vulnerable to data corruption (preference for data Consistency/integrity over performance).
  4. dblink now provides remote-database querying/connections.
  5. Non-blocking index creation (CONCURRENTLY).
  6. Postgres Multi-Version Concurrency Control (MVCC): reading never blocks writing and vice-versa. Also, see the article on ACID.

Disadvantages

  1. Deprecated: No cross-database querying (a decisive factor for many database systems at scale: MySQL was a top choice for that reason) prior to 8.2.
  2. Deprecated: Slightly slower than MySQL (using the older MyISAM engine - a decisive factor for many database systems at scale: MySQL was a top choice for that reason) for READ and transaction-heavy workloads.
  1. https://developer.okta.com/blog/2019/07/19/mysql-vs-postgres
  2. https://www.cybertec-postgresql.com/en/joining-data-from-multiple-postgres-databases/
  3. https://www.postgresql.org/docs/current/dblink.html
  4. https://www.postgresql.org/docs/7.1/mvcc.html
  5. https://www.postgresql.org/docs/12/sql-reindex.html

Postgres: Indexes and Views

  1. Indexes - used to improve query performance within a Table - Indexes a Table by one or more Columns.
  2. Views - a logical representation of a Table or Tables.

Indexes

Postgres supports:

  1. Hash and B-Tree (the default) indexes, along with GiST, GIN, and BRIN
  2. Partial Indexes support conditioned indexing: CREATE INDEX CONCURRENTLY my_index ON my_table (column1_name) WHERE amount > 0;
  3. Concurrent non-blocking indexing: CREATE INDEX CONCURRENTLY my_index ON my_table (column1_name, column2_name);

Implicit Indexes are automatically created for any Primary Key on a Table by default.

Materialized Views

Essentially a cached Table that stores the results of a query:

DROP MATERIALIZED VIEW my_view_name;

CREATE MATERIALIZED VIEW my_view_name
AS
    SELECT * FROM example;
    --- Assume column 'name'

That can be queried itself:

SELECT * FROM my_view_name;

And that can be refreshed:

REFRESH MATERIALIZED VIEW my_view_name;

Concurrent refresh:

CREATE UNIQUE INDEX my_index ON my_view_name (name);

REFRESH MATERIALIZED VIEW CONCURRENTLY my_view_name;
  1. https://www.postgresql.org/docs/current/indexes-partial.html
  2. https://www.postgresqltutorial.com/postgresql-views/postgresql-materialized-views/
  3. https://www.postgresql.org/docs/current/rules-materializedviews.html
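
SQLite has no Materialized Views, but the same cache-then-refresh idea can be sketched in Python's sqlite3 module with CREATE TABLE ... AS SELECT (a rough analogy to the Postgres statements above; table and rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE example (name TEXT);
    INSERT INTO example VALUES ('a'), ('b');

    -- analogue of: CREATE MATERIALIZED VIEW my_view_name AS SELECT * FROM example;
    CREATE TABLE my_view_name AS SELECT * FROM example;
""")

conn.execute("INSERT INTO example VALUES ('c')")

# The cached results are stale until explicitly refreshed...
stale = conn.execute("SELECT COUNT(*) FROM my_view_name").fetchone()
print(stale)  # (2,)

# ...analogue of: REFRESH MATERIALIZED VIEW my_view_name;
conn.executescript("""
    DROP TABLE my_view_name;
    CREATE TABLE my_view_name AS SELECT * FROM example;
""")
fresh = conn.execute("SELECT COUNT(*) FROM my_view_name").fetchone()
print(fresh)  # (3,)
```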

Postgres: JSON

Postgres supports both JSON and JSONB data types:

  1. JSON is stored in a standard String format.
  2. JSONB is a more performant format that uses binary (hence, the "b") to improve indexing and querying at the expense of more complex Serialization.

Operators

Given a table:

id | json_col | json_array_col                                                         | jsonb_col | jsonb_array_col
1  | [1,2,3]  | [{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}] | [1,2,3]   | [{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}]
  1. -> - extracts the JSON field at the specified key (or array index), returned as JSON. E.g. - jsonb_col -> 'name'
  2. ->> - extracts the JSON value at the specified index (numeric) or the value at the specified key, returned as text. E.g. - json_col ->> 2
  3. ::int, ::json, etc. - since extracted JSON values lack a Postgres type, use :: to cast the value to a type.

Examples

To initialize an example Postgres table:

DROP TABLE IF EXISTS example;

CREATE TABLE example (
  id INT,
  json_col JSON,
  json_array_col JSON,
  jsonb_col JSONB,
  jsonb_array_col JSONB
);

-- Insert values into table.

INSERT INTO example VALUES (1,
  '[1,2,3]'::json,
  '[{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}]'::json,
  '[1,2,3]'::jsonb,
  '[{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}]'::jsonb
);

Use the following queries to retrieve the desired JSON data:

-- queries

SELECT * FROM example;

-- insert via json

INSERT INTO example VALUES (2,
  '[1,2,3]'::json,
  '[{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}]'::json,
  '[1,2,3]'::jsonb,
  '[{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}]'::jsonb
);

INSERT INTO example
SELECT id, json_col, json_array_col, jsonb_col, jsonb_array_col
FROM json_populate_record (NULL::example,
    '{
      "id": 3,
      "json_col": {"name": "bob", "age": 111},
      "json_array_col": [{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}],
      "jsonb_col": {"name": "sarah", "age": 2222},
      "jsonb_array_col": [{"id": 0, "name": "a"},{"id": 1, "name": "a"},{"id": 2, "name": "c"}]
    }'
);

-- query into json array

SELECT arr -> 'id' AS json_id, arr -> 'name' AS json_name
FROM example e, json_array_elements(e.json_array_col) arr
WHERE (arr ->> 'id')::int > -1;

-- query json column

SELECT json_col::json ->> 2 FROM example;

SELECT json_col -> 'age' FROM example;

SELECT json_col -> 'age' AS json_age FROM example WHERE (json_col ->> 'age')::int = 111;

-- query into jsonb array

SELECT arr -> 'id' AS json_id, arr -> 'name' AS json_name
FROM example e, jsonb_array_elements(e.jsonb_array_col) arr
WHERE (arr ->> 'id')::int > -1;

-- query jsonb column

SELECT jsonb_col::json ->> 2 FROM example;

SELECT jsonb_col -> 'age' FROM example;

SELECT jsonb_col -> 'name' AS jsonb_name, jsonb_col -> 'age' AS jsonb_age
FROM example WHERE (jsonb_col ->> 'name') = 'sarah';
  1. https://github.com/Thoughtscript/postgres_json_practice
  2. https://www.postgresql.org/docs/current/functions-json.html

Code samples:

  1. https://github.com/Thoughtscript/postgres_json_practice

Git: Quick Reference Sheet

Some common and useful Git commands.

Checkout New Branch From Origin

  1. git fetch origin my_branch
  2. git pull
  3. git checkout my_branch

View Branch Commit History

  1. git log

View Entire Local Change History

  1. git reflog

View File Changes

By SHA:

  1. git diff --name-only af43c41d..HEAD
  2. git diff af43c41d..master

By branch:

  1. git diff --name-only origin/deploy..master
  2. git diff origin/deploy..master

Correct Previous Commits

Review, alter, remove, amend last 3 commits:

  1. git rebase -i HEAD~3
  2. Type i to enter insert mode (the default editor is Vim).
  3. Find the line with the desired commit hash. Modify it using pick, drop, etc.
  4. Hit the esc key to exit insert mode.
  5. Type :wq to save and close the file (Git will proceed through the stipulated changes) or type :q! to close the file abandoning all changes.
  6. git push -f to override previous changes - do not use this on master/main, only ever within a development branch.

Git Amend

Correct the last commit message:

  1. git commit --amend -m "Your new message"

Discard Uncommitted Branch Changes

  1. git clean -f -d
  2. git reset --hard HEAD

Abandon a Rebase

  1. git rebase --abort
  2. git clean -f -d
  3. git reset --hard HEAD

Change Branch W/ Same Name As Dir

If the repository root contains a dir with the same name as a branch, git checkout will complain that the reference is ambiguous.

Use the following instead:

  1. git fetch origin deploy (if freshly cloned)
  2. git switch -f deploy

Set Environment Config

  1. git config --global user.username "my_username"
  2. git config --global user.email "my_email@email.com"

Disable automatic conversion of Unix line endings to Dos/Windows ones (on Windows):

  1. git config --global core.autocrlf false
  1. https://devhints.io/vim
  2. https://git-scm.com/doc
  3. https://toolslick.com/conversion/text/dos-to-unix
  4. https://stackoverflow.com/questions/1967370/git-replacing-lf-with-crlf
  5. https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings#global-settings-for-line-endings

GitHub Actions: Overview

GitHub Actions (the tool itself) supports Workflow and CI/CD automation through GitHub.

Official Documentation: https://docs.github.com/en/actions/about-github-actions/understanding-github-actions

Very helpful exercises: https://learn.microsoft.com/en-us/collections/n5p4a5z7keznp5

GitHub Actions

GitHub Actions are packaged scripts to automate tasks through GitHub.

There are three kinds of GitHub Actions:

  1. Container Actions - where a Linux Environment comprises part of the Action.

     # Example
     name: "Hello Actions"
     description: "Greet someone"
     author: "octocat@github.com"
    
     inputs:
         MY_NAME:
             description: "Who to greet"
             required: true
             default: "World"
    
     runs:
         uses: "docker"
         image: "Dockerfile"
    
     branding:
         icon: "mic"
         color: "purple"

    https://github.com/Thoughtscript/example-container-action

    https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-docker-container-action

  2. JavaScript Actions - execute JavaScript as an Action.

     name: 'Hello World'
     description: Simple example
    
     inputs:
       myinput:  # id of input
         description: My input arg
         required: true
         default: "I am a string"
    
     outputs:
       myoutput: # id of output
         description: Output of the function
    
     runs:
       using: node20
       main: script.js

    https://github.com/Thoughtscript/example-js-action

    https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-javascript-action

  3. Composite Actions - combine multiple Workflow Steps together into one Action.

    name: 'Hello World'
    description: 'Greet someone'
    
    inputs:
      who-to-greet:  # id of input
        description: 'Who to greet'
        required: true
        default: 'World'
    
    runs:
      using: "composite"
      steps:
        - name: Set Greeting
          run: echo "Hello $INPUT_WHO_TO_GREET."
          shell: bash
          env:
            INPUT_WHO_TO_GREET: ${{ inputs.who-to-greet }}
    
        # ...
    
        - name: Run goodbye.sh
          run: goodbye.sh
          shell: bash

https://github.com/Thoughtscript/example-composite-action

The above are characterized by having:

  1. inputs and/or outputs
  2. runs and using

GitHub Workflow

name: A workflow for my Hello World file
on: push
jobs:
  build:
    name: Hello world action
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: ./action-a
        with:
          MY_NAME: "Mona"

https://github.com/Thoughtscript/example-workflow

https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-composite-action

The Anatomy of a GitHub Action

Workflow > Job(s) > Step(s) > Action(s) defined in a YAML file.

  1. A Workflow defines one or more Jobs.
  2. A Job defines one or more Steps.
    • A Job has an associated Runner that executes the Job.
    • (Think Runnable or Callable in Java.)
  3. A Step defines one or more Actions.
    • A Task with multiple commands.
  4. An Action is a discrete command.
    • (Think RUN in Docker.)

GitHub Integration

Organizations and users typically integrate their GitHub Repositories with GitHub Actions:

  1. Define a workflow.yaml file in the root of some Source Code.
  2. The Source Code is checked into a GitHub Repository.
  3. The GitHub Repository is associated with GitHub Secrets or any integrations through the GitHub User Interface.
  1. https://docs.github.com/en/actions/about-github-actions/understanding-github-actions
  2. https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-docker-container-action
  3. https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-javascript-action
  4. https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-composite-action
  5. https://learn.microsoft.com/en-us/collections/n5p4a5z7keznp5

Code samples:

  1. https://github.com/Thoughtscript/example-workflow
  2. https://github.com/Thoughtscript/example-js-action
  3. https://github.com/Thoughtscript/example-container-action
  4. https://github.com/Thoughtscript/example-composite-action

GitHub Actions: Advanced Topics

Triggers

Reference list for available trigger conditions: https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows.

Use like so:

name:
on:
  issues:
    types: [opened, edited, milestoned]

https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/triggering-a-workflow

Combining Actions

GitHub Actions can also be combined or composed.

Typified by a uses YAML block:

name: my_example
on:
  #...

jobs:
  tag:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Run test
      run: |
        pytest test.py

https://github.com/actions

Environment Variables

Environment Variables can be defined and then used elsewhere in the Workflow:

#...
env:
  AWS_REGION: MY_AWS_REGION              
  ECR_REPOSITORY: MY_ECR_REPOSITORY           
  ECS_SERVICE: MY_ECS_SERVICE                
#...

Default Environment Variables:

  1. Are prefixed with GITHUB_.
  2. Defined by GitHub and not within a Workflow.
  3. Have an associated Context property.

Default Environment Variables: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#default-environment-variables

Advanced Expressions

GitHub Actions supports many complex Expressions, Operators, and Functions (as YAML keys or values, depending):

  1. Numeric Boolean: <=, >=, ==, !=, etc.
  2. Literals: ${{ 'I''m a string and I need tic marks around me in here!' }}, ${{ -9.2 }}
  3. Logical Boolean: &&, ||, !, etc.
  4. YAML Conditional Boolean: if: ${{ success() }}, etc.
  5. String: contains('Hello world', 'llo'), etc.
  6. Parsing: toJSON(value), etc.
  7. Dynamic Variable Setting: >>

https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/evaluate-expressions-in-workflows-and-actions

  1. https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/triggering-a-workflow
  2. https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows
  3. https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#default-environment-variables
  4. https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/evaluate-expressions-in-workflows-and-actions

GitHub Actions: Integration

GitHub Secrets

GitHub Actions can integrate with GitHub Secrets to define any Secrets, Credentials, or Tokens required by the CI/CD or Workflow:

  1. These are defined in the GitHub User Interface available through Settings > Secrets and variables > Actions > Actions secrets and variables.
    • e.g. - https://github.com/Thoughtscript/carbon_offsets/settings/secrets/actions
    • These are not to be confused with Codespaces Secrets!
  2. And passed as values into YAML like so: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}.

Some services handle the actual retrieval, refreshing, and obtaining of Tokens through the above.

Many Cloud Providers offer prepublished GitHub Actions that perform certain operations (such as logging into AWS) that can be used in a uses block.

In other cases one may need to define a command that calls, say, some OAuth 2.0 REST API and stores the dynamic token (using >>) before making subsequent calls (per usual token auth flows):

name: Create issue on commit

on: [ push ]

jobs:
  my_issue:
    runs-on: ubuntu-latest
    # ...
    steps:
      - name: Create issue using REST API
        run: |
          curl --request POST \
          --url https://myoauthserverendpoint \ ...

https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions

Terraform

Terraform HCP can integrate with GitHub Actions (and with Terraform Cloud Providers).

https://developer.hashicorp.com/terraform/tutorials/automation/github-actions

AWS

https://aws.amazon.com/blogs/devops/integrating-with-github-actions-ci-cd-pipeline-to-deploy-a-web-app-to-amazon-ec2/

Azure

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      # Checkout the repo
      - uses: actions/checkout@main
      - uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
# ...

https://learn.microsoft.com/en-us/azure/app-service/deploy-github-actions?tabs=openid%2Caspnetcore

  1. https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions
  2. https://aws.amazon.com/blogs/devops/integrating-with-github-actions-ci-cd-pipeline-to-deploy-a-web-app-to-amazon-ec2/
  3. https://learn.microsoft.com/en-us/azure/app-service/deploy-github-actions?tabs=openid%2Caspnetcore

GitHub Actions: Enterprise

GitHub Actions supports many features for Enterprise operations.

Templates

  1. GitHub Actions Templates can be defined to encourage standards and reuse.
  2. These are similar to GitHub Pull Request Templates.
  3. These are basically prepopulated but blank YAML files that can be used as a starting point.

https://docs.github.com/en/actions/writing-workflows/using-workflow-templates

Organization Policies

  1. GitHub Actions Policies can be defined to restrict who can do what.
  2. These are similar to GitHub Organizational Policies.

https://docs.github.com/en/enterprise-cloud@latest/admin/enforcing-policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-github-actions-in-your-enterprise

  1. https://docs.github.com/en/actions/writing-workflows/using-workflow-templates
  2. https://docs.github.com/en/enterprise-cloud@latest/admin/managing-github-actions-for-your-enterprise/getting-started-with-github-actions-for-your-enterprise/introducing-github-actions-to-your-enterprise
  3. https://docs.github.com/en/enterprise-cloud@latest/admin/enforcing-policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-github-actions-in-your-enterprise

GitHub Actions: Misc.

Misc. study items:

  1. [skip ci], [ci skip], [no ci], [skip actions], [actions skip]
  2. Use | for a multiline string (to run multiple commands in a single step - not &&)
  3. Default permission levels that can be assigned to GITHUB_TOKEN: none, write, read.
  4. Multiple jobs will run in parallel by default.
  5. needs keyword specifies that one job requires another.
  6. $ vs ${{ ... }}
     runs:
       using: "composite"
       steps:
         - name: Set Greeting
           # Use the ENV value
           run: echo "Hello $INPUT_WHO_TO_GREET."
           shell: bash
           env:
             # Set the ENV value
             INPUT_WHO_TO_GREET: ${{ inputs.who-to-greet }}
  7. Debugging syntax from within a Step: echo "::debug::Set the Octocat variable".
  8. OIDC is recommended for security hardening.
  9. Workflow triggering events:
  10. Status Check Functions:
  11. steps.<step_id>.outcome
  12. Branch Filters use Glob Patterns:
  13. Disabling

Engineering: HTTP and SSL

  1. REST - Representational State Transfer (read below).
  2. HTTP - Hypertext Transfer Protocol - often equated with REST. The basis for the World Wide Web typified by Request/Response Objects, Headers, Parameters, HTTP Methods, and Sessions.
  3. SSL/TLS - HTTPS - Hypertext Transfer Protocol Secure (HTTP encrypted over TLS; SSL is TLS's deprecated predecessor).

Certificates

  1. Certificate Authority - validates a Certificate (say, X.509).
  2. Public and Private Keys - a kind of Asymmetric Encryption that bifurcates Credentials into two pieces (a shareable public key and a secret private key).

REST

REST (Representational State Transfer) - a client-server architectural paradigm that encourages the following core precepts:

  1. Client-Server - the familiar Client to Server paradigm. A Client connects to a Server.
  2. Statelessness - the Server doesn't store Client Session state between Requests.
  3. Layered System - read Service Layer Architecture article.
  4. Uniform Interface - common data types, HTTP Methods, etc.

Methods:

  1. OPTIONS - indicates that a Client is seeking a Server and further HTTP Request information (required HTTP Headers, etc.).
  2. GET - indicates that a Client will READ a persisted resource.
  3. POST - indicates that a Client will CREATE a persisted resource.
  4. PUT - indicates that a Client will UPDATE a persisted resource.
  5. PATCH - indicates that a Client will partially UPDATE a persisted resource.
  6. DELETE - indicates that a Client intends to DELETE a persisted resource.
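
The Method-to-CRUD mapping above can be sketched with only Python's standard library; the /resource path and the reply strings are hypothetical:

```python
import http.server
import threading
import urllib.request

class CrudHandler(http.server.BaseHTTPRequestHandler):
    def _reply(self, body):
        data = body.encode()
        self.send_response(200)  # Status OK
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):     # READ a persisted resource
        self._reply("read")

    def do_POST(self):    # CREATE a persisted resource
        self._reply("created")

    def do_PUT(self):     # UPDATE a persisted resource
        self._reply("updated")

    def do_DELETE(self):  # DELETE a persisted resource
        self._reply("deleted")

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), CrudHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/resource"
results = {}
for method in ("GET", "POST", "PUT", "DELETE"):
    req = urllib.request.Request(url, method=method)
    with urllib.request.urlopen(req) as resp:
        results[method] = resp.read().decode()

server.shutdown()
print(results)  # {'GET': 'read', 'POST': 'created', 'PUT': 'updated', 'DELETE': 'deleted'}
```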

Request / Response Lifecycle

The HTTP Request/Response Lifecycle:

  1. Request Object -
    1. Headers - specifies the Content-Type, CORS information, Credentials or Tokens, etc. (sent back from the Web Client).
    2. HTTP Method - OPTIONS, GET, POST, PUT, PATCH, DELETE.
    3. URL (Uniform Resource Locater) - the IP Address with optional Port number, DNS address, etc.
    4. Parameters (optional) - specify query filters to narrow down some query result: id, page, etc.
    5. Body (optional) - submitted or requested information can be regimented as JSON, text, a form, etc., and encapsulated by the Request and Response Objects.
  2. Response Object -
    1. Headers - specifies the Content-Type, CORS information, Credentials or Tokens, etc. of the Response Object (sent back from the Web Server).
    2. Body (optional) - submitted or requested information can be regimented as JSON, text, a form, etc., and encapsulated by the Request and Response Objects.

Engineering: TCP, UDP, IP, DNS

  1. IP - Internet Protocol Address

    • IPv6

      • Hexadecimal
      • 128-Bit IP Address
      • Eight groups of four hexadecimal digits
      • Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
    • IPv4

      • Purely numeric (decimal)
      • 32-Bit IP Address
      • Four octets in the inclusive range 0-255
      • Example: 127.0.0.1
    • CIDR - Classless Inter-Domain Routing

      • Solution for allocating IP Addresses and IP Routing.

      • The convention <MY_IP_ADDRESS>/<CIDR_LENGTH> (e.g. - 10.0.0.0/16) specifies the CIDR Length for the IP Address (Prefix).

      • The CIDR Length defines the number of bits reserved for the Netmask.

        • The Netmask divides an IP Address into sections akin to dividing a Zip Code into Street Addresses.
        • Used to specify which part of the Host IP Address space is reserved for the Host number and which is reserved for the Subnet number.
        • The lower the CIDR Length, the more IP Addresses are available within the network.
      • Classes and their respective Netmasks:

        • Class A - 255.0.0.0
        • Class B - 255.255.0.0
        • Class C - 255.255.255.0
      • CIDR Notation | Host Formula   | Available Hosts
        /8   | 2^(32-8)  - 2 | 16,777,214
        /9   | 2^(32-9)  - 2 | 8,388,606
        /10  | 2^(32-10) - 2 | 4,194,302
        /11  | 2^(32-11) - 2 | 2,097,150
        /12  | 2^(32-12) - 2 | 1,048,574
        /13  | 2^(32-13) - 2 | 524,286
        /14  | 2^(32-14) - 2 | 262,142
        /15  | 2^(32-15) - 2 | 131,070
        /16  | 2^(32-16) - 2 | 65,534
        /17  | 2^(32-17) - 2 | 32,766
        /18  | 2^(32-18) - 2 | 16,382
        /19  | 2^(32-19) - 2 | 8,190
        /20  | 2^(32-20) - 2 | 4,094
        /21  | 2^(32-21) - 2 | 2,046
        /22  | 2^(32-22) - 2 | 1,022
        /23  | 2^(32-23) - 2 | 510
        /24  | 2^(32-24) - 2 | 254
        /25  | 2^(32-25) - 2 | 126
        /26  | 2^(32-26) - 2 | 62
        /27  | 2^(32-27) - 2 | 30
        /28  | 2^(32-28) - 2 | 14
        /29  | 2^(32-29) - 2 | 6
        /30  | 2^(32-30) - 2 | 2

        (From: https://erikberg.com/notes/networks.html)

  2. DNS - Domain Name System - associates a (purely) numeric IP Address (127.0.0.1) with a human readable/friendly Domain Name (localhost).

  3. TCP - Transmission Control Protocol - a Transport Layer protocol that complements IP (TCP/IP). Negotiates and maintains connections (via handshake) before data is transmitted, providing ordered, reliable delivery.

  4. UDP - User Datagram Protocol - a connectionless Transport Layer alternative to TCP that sends Datagrams without establishing or verifying connections; any reliability or connection management must be handled by some other system.

  5. HTTP/2 - introduces binary framing, multiplexing, and HPACK Header compression (which uses Huffman Encoding) to reduce packet sizes when initially negotiating an HTTP Request.

  6. QUIC - HTTP/3 - built on top of UDP and removes TCP as the Transport Layer and optimizes some of the initial handshaking resulting in reduced packet sizes, I/O, etc. Multiplexed, persistent connection.

  7. A Proxy is used as an intermediary between an IP Address and a destination resource.

    • Note: Proxying typically refers to Forward Proxying.
    • Forward Proxying (a Forward Proxy Server) is connected to by a Client in order to mask the Client IP Address. The Forward Proxy then connects to a desired resource. (The Proxy masks the incoming IP Address and uses the desired outgoing IP Address.)
    • A Reverse Proxy is a hidden Gateway or intermediary between the Client and desired resource. (AWS API Gateway is paradigmatic - a Client makes an HTTP Request to a specific endpoint; the endpoint is associated with a configured Lambda Function which is called on behalf of the inbound Request.)
    • Forward Proxying is deliberately used or known to the Client. By contrast, in Reverse Proxying scenarios the Proxy acts unbeknownst to the Client.
    • Note that both kinds of Proxying involve the same sequential pattern: a Client C connects to an intermediary Proxy P to access a desired target resource R.
  8. An SSH Bastion serves as an intermediary between an incoming SSH connection and a desired resource.

    • The SSH Bastion serves to "jump" the incoming SSH connection to the desired resource after the inbound connection is authenticated and authorized.
    • In this way, destination resources are protected from being exposed publicly and all inbound SSH requests can be verified/controlled.
  1. https://cybermeteoroid.com/ipv4-vs-ipv6-differences-you-need-to-know/
  2. https://www.geeksforgeeks.org/what-is-ipv6/
  3. https://quicwg.org/
  4. https://www.rfc-editor.org/rfc/rfc4632.html
  5. https://stackoverflow.com/questions/46616072/cidr-blocks-aws-explanation
  6. https://www.rfc-editor.org/info/rfc1918
  7. https://erikberg.com/notes/networks.html
  8. https://docs.oracle.com/cd/E19455-01/806-0916/ipconfig-32/index.html
  9. https://stackoverflow.com/questions/224664/whats-the-difference-between-a-proxy-server-and-a-reverse-proxy-server
  10. https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#forwardreverse
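
The host-count formula in the CIDR table above can be sanity-checked programmatically. A minimal sketch (the CidrHosts class and availableHosts helper are my own names):

```java
// Usable IPv4 host addresses for a CIDR prefix: 2^(32 - prefix) - 2
// (subtracting the network and broadcast addresses).
public class CidrHosts {
    public static long availableHosts(int prefixLength) {
        if (prefixLength < 0 || prefixLength > 30) {
            throw new IllegalArgumentException("expected a prefix between /0 and /30");
        }
        return (1L << (32 - prefixLength)) - 2;
    }

    public static void main(String[] args) {
        System.out.println("/24 -> " + availableHosts(24)); // 254
        System.out.println("/16 -> " + availableHosts(16)); // 65534
        System.out.println("/8  -> " + availableHosts(8));  // 16777214
    }
}
```

(The /31 and /32 prefixes are special cases - point-to-point links and single hosts - so the sketch rejects them.)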

Engineering: Status Codes

  1. 200 - OK: request succeeded.
  2. 201 - Created: created successfully, typically via POST.
  3. 202 - Accepted: queued/batched successfully but not yet complete.
  4. 301 - Moved Permanently: URL permanently moved.
  5. 400 - Bad Request: malformed request.
  6. 401 - Unauthorized: lacking (valid) credentials.
  7. 403 - Forbidden: no authorization to access even with credentials.
  8. 404 - Not Found: page or resource can't be found at the specified URL.
  9. 405 - Method Not Allowed: wrong HTTP Method for the endpoint.
  10. 500 - Internal Server Error: unhandled server-side failure.
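
The first digit of a status code determines its class (2xx success, 3xx redirection, 4xx client error, 5xx server error). A small illustrative helper (class and method names are my own):

```java
// Buckets an HTTP status code by its class (first digit).
public class HttpStatus {
    public static String classify(int code) {
        if (code >= 100 && code < 200) return "Informational";
        if (code >= 200 && code < 300) return "Success";
        if (code >= 300 && code < 400) return "Redirection";
        if (code >= 400 && code < 500) return "Client Error";
        if (code >= 500 && code < 600) return "Server Error";
        return "Unknown";
    }

    public static void main(String[] args) {
        System.out.println(classify(201)); // Success
        System.out.println(classify(404)); // Client Error
        System.out.println(classify(500)); // Server Error
    }
}
```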

Engineering: Common Port Numbers

  1. 443 - HTTPS/TLS default port number
  2. 80 - HTTP default port number (8080 is a common alternate/development HTTP port)
  3. 22 - SSH default port number
  4. 25 - SMTP default port number
  5. 1433 - MSSQL default port number

Note: 80/443 are usually the default port numbers used by gRPC since it runs over HTTP/2 (with or without TLS) - for example as in this article I wrote: https://goteleport.com/docs/api/getting-started/

Engineering: Logging Levels

Logging levels.

Common and Log4j

Some subset of the following is likely to be encountered in most logging frameworks (Log4j) or tools (Terraform).

In descending order of verbosity (ascending severity):

  1. TRACE - Most granular, used to gain trace information throughout the entirety of an Application (including third-party dependencies).
  2. DEBUG - Used primarily when debugging or in Development to detail the inner workings of written code.
  3. INFO - The most granular level typically enabled in Production.
  4. WARN - Something unusual or unexpected but less than an outright Exception or Application terminating event.
  5. ERROR - An Exception, planned for or unexpected.
  6. FATAL - An Application terminating event.

Of the above: INFO, WARN, and ERROR are the three primary logging levels that most teams will default to.

Java Util

java.util.logging.Level differs from the above by using (in descending order):

  1. SEVERE
  2. WARNING
  3. INFO
  4. CONFIG
  5. FINE
  6. FINER
  7. FINEST

https://docs.oracle.com/javase/8/docs/api/java/util/logging/Level.html
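
Each java.util.logging Level carries a numeric value (SEVERE = 1000 down to FINEST = 300), and a logger only emits records at or above its configured threshold. A minimal sketch of that filtering rule (the helper name is my own):

```java
import java.util.logging.Level;

// A record is loggable when its level's numeric value is at least
// the logger's configured threshold.
public class LevelFilter {
    public static boolean isLoggable(Level record, Level threshold) {
        return record.intValue() >= threshold.intValue();
    }

    public static void main(String[] args) {
        System.out.println(isLoggable(Level.SEVERE, Level.INFO)); // true
        System.out.println(isLoggable(Level.CONFIG, Level.INFO)); // false
        System.out.println(isLoggable(Level.FINE, Level.INFO));   // false
    }
}
```

This is why setting a Production logger to INFO suppresses CONFIG, FINE, FINER, and FINEST records.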

  1. https://sematext.com/blog/logging-levels/
  2. https://logging.apache.org/log4j/2.x/manual/customloglevels.html
  3. https://docs.oracle.com/javase/8/docs/api/java/util/logging/Level.html

Engineering: Naming Conventions

Some common naming conventions:

  1. DRY - database naming convention - e.g. a Person table mapped to a Person domain Class via an Object Relational Mapping (ORM)
  2. Snake Case - some_person
  3. Camel Case - somePerson
  4. Pascal Case - SomePerson (class names like Person)
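
The case conventions above can be converted mechanically. A small sketch (helper names are my own):

```java
// Converts between snake_case and camelCase identifiers.
public class Naming {
    public static String snakeToCamel(String snake) {
        StringBuilder out = new StringBuilder();
        boolean upperNext = false;
        for (char c : snake.toCharArray()) {
            if (c == '_') { upperNext = true; continue; }
            out.append(upperNext ? Character.toUpperCase(c) : c);
            upperNext = false;
        }
        return out.toString();
    }

    public static String camelToSnake(String camel) {
        StringBuilder out = new StringBuilder();
        for (char c : camel.toCharArray()) {
            if (Character.isUpperCase(c)) out.append('_').append(Character.toLowerCase(c));
            else out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(snakeToCamel("some_person")); // somePerson
        System.out.println(camelToSnake("somePerson"));  // some_person
    }
}
```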

Engineering: Basic Terminal Commands

Shebang

Specifies which interpreter (e.g. - which bash executable) to use as the execution environment and where to find it.

On modern Linux systems it's recommended to use:

#!/usr/bin/env bash

Consult: https://tldp.org/LDP/abs/html/sha-bang.html and https://stackoverflow.com/questions/10376206/what-is-the-preferred-bash-shebang

Nano and Vim

Edit a file in Bash with nano:

nano filename

Or open it with Vim:

vim filename

Toggle Insert Mode from within Vim:

i
# enter Insert Mode - press i
esc
# exit Insert Mode - press esc

Save and exit Vim:

:wq!

Exit Vim without saving:

:q!
:qa

https://tecadmin.net/save-and-quit-vim/ and https://vim.rtorr.com/

Kill Task

Taskkill /IM node.exe /F
killall node

Open SSL

openssl genrsa -out key.pem 2048
openssl req -new -sha256 -key key.pem -out csr.csr
openssl req -x509 -sha256 -days 365 -key key.pem -in csr.csr -out certificate.pem

Grant Permission

Grant (all) Read, Write, and Execute permissions:

sudo chmod +rwx file

SSH Keys

Generate a new SSH Public and Private Key pair using ssh-keygen:

ssh-keygen -t ed25519 -C "your_email@example.com"

Open the Public Key (with suffix .pub) and copy the Public Key into the necessary cloud resource account (GitHub, etc.).

Copy the Private Key into ~/.ssh (or equivalent) and associate with ssh-agent:

ssh-add ~/.ssh/id_ed25519

Host key mappings will be visible in ~/.ssh/known_hosts.

An excellent resource: https://docs.github.com/en/authentication/connecting-to-github-with-ssh

Mac: View Hidden Files

Hold-down: Command + Shift + Period (⌘⇧.) simultaneously to see hidden (system) files.

Mac: Multiple DB Browser for SQLite Clients

To open multiple instances of DB Browser for SQLite at the same time:

"/Applications/DB Browser for SQLite.app/Contents/MacOS/DB Browser for SQLite" &
  1. https://tldp.org/LDP/abs/html/sha-bang.html
  2. https://stackoverflow.com/questions/10376206/what-is-the-preferred-bash-shebang
  3. https://tecadmin.net/save-and-quit-vim/
  4. https://vim.rtorr.com/
  5. https://docs.github.com/en/authentication/connecting-to-github-with-ssh
  6. https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
  7. https://sqlitebrowser.org/

Engineering: Mac Install Notes

Personal notes to get a development environment setup and configured on macOS 10.15.3.

Note: configuring .bash_profile is required for macOS-specific directory paths.

Note: configuring .bashrc is required for many Bash-specific paths.

Note: configuring .zshrc is required for many Zsh-specific paths (macOS Catalina+).

The notes below are written for Bash. In most cases, the same commands and export statements can be used interchangeably between Bash and Zsh (provided you use the correct configuration file).

I'm also not a huge fan of Homebrew (think Catalina system directory structure incompatibility, multiple lingering installs, version conflicts that typically or eventually arise locally, etc.) so the installation steps below tend to prefer the native installers wherever possible and appropriate.

Xcode

For developing Apple mobile, desktop, tablet, and watch applications.

Swift installation:

  1. Use this link: https://apps.apple.com/us/app/xcode/id497799835 to get Swift + Xcode
  2. Test the installation using $ swift --version
  3. Test the Xcode installation using $ xcodebuild -version

CocoaPods:

  1. Requires Ruby gem manager
  2. Execute the following: $ sudo gem install cocoapods
  3. Use $ pod install to download the dependencies into your project
  4. NOTE: You must open the <PROJECT_NAME>.xcworkspace file rather than the <PROJECT_NAME>.xcodeproj file to build and compile your project (with Pods) correctly

Python 2.7

For Python, PIP, and Django apps.

Python 2.7 installation:

  1. Python 2.7 is pre-installed on macOS 10.x.
  2. Test the installation using $ python --version

PIP installation:

  1. Execute $ sudo easy_install pip
  2. Test the installation using $ pip --version

Django installation:

  1. Execute $ sudo pip install Django==3.0.3
  2. Alternatively, execute $ sudo pip install -r requirements.txt

Python 3

For Python 3.x.

Python 3.x installation:

  1. Download from: https://www.python.org/downloads/mac-osx/
  2. Test the installation using $ python3 --version

PIP:

  1. PIP is automatically installed as part of the Python 3 installation
  2. Upgrade PIP: $ python3 -m pip install --upgrade pip
  3. Install dependencies: $ python3 -m pip install -r requirements.txt
  4. List all installed libraries: $ pip freeze
  5. Clear out PIP cache: $ pip uninstall -y -r <(pip freeze)

Venv:

  1. Venv is automatically installed as part of the Python 3 installation
  2. Create a Venv environment: $ python3 -m venv VENV_ENV
  3. ... and activate it: $ source VENV_ENV/bin/activate (on Windows: VENV_ENV\Scripts\activate)

Ruby

For Ruby on Rails apps.

Ruby installation:

  1. Ruby 2.6.3p62 is pre-installed on macOS 10.x.
  2. Test the installation using $ ruby --version
  3. Test Ruby gem manager using $ gem -v

Rails installation:

  1. $ gem install rails

C++

  1. C++ GCC 4.2.1 is distributed with Xcode 11.3 (whose installation instructions are provided above).
  2. Test the installation using $ gcc --version

CMake:

  1. Download the CMake Unix/Linux Source (has \n line feeds) from: https://cmake.org/download/
  2. Extract the contents and execute $ bash bootstrap within the root directory
  3. Then execute $ make from the same root directory
  4. Copy the following into .bash_profile using $ sudo nano ~/.bash_profile (and modify as needed):
export PATH=$PATH:/Users/USER_NAME/Desktop/cmake-3.16.6/bin
  5. Test the installation: $ cmake --version

Java

For Java Spring, Gradle, Maven, and Tomcat stacks.

Java installation:

  1. Use this link: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
  2. Test the installation using $ javac -version

Apache Tomcat 8.5.x installation:

  1. Download the .zip from http://tomcat.apache.org
  2. Copy the extracted directory to your desktop (or some other appropriate location)
  3. Navigate to the ROOT/bin directory and execute the following Bash command sudo chmod +x *.sh
  4. Execute $ sudo bash startup.sh to launch Tomcat on the default port localhost:8080

Refer to the very helpful: https://wolfpaulus.com/tomcat/ for more comprehensive configurations.

Gradle installation:

  1. Download from: https://gradle.org/install/
  2. Copy the following into .bash_profile using $ sudo nano ~/.bash_profile (and modify as needed):
export PATH=$PATH:/Users/USER_NAME/Desktop/gradle-6.2/bin/
  3. Test the installation using $ gradle --version

Node

For NodeJS server and client apps.

NVM installation:

  1. $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash
  2. $ sudo touch ~/.bash_profile
  3. $ sudo touch ~/.bashrc
  4. $ sudo nano ~/.bash_profile - copy the contents below into this file
  5. $ sudo nano ~/.bashrc - copy the contents below into this file
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

(Copy the above into configuration files.)

  6. Test the installation using $ nvm ls
  7. Download the desired version of Node using $ nvm install 10.0.0 && nvm use 10.0.0

Typescript installation:

  1. Execute npm install -g typescript

Golang

  1. Go to: https://golang.org/dl/ and download the newest version
  2. Install Go using the downloaded installer
  3. Test the installation using $ go version

Rust

For Rust apps.

Rust installation:

  1. Execute $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh (run as your normal user - rustup doesn't require sudo)
  2. Test the installation using $ rustc --version

Rust uninstallation:

  1. $ rustup self uninstall

Refer to the Rust documentation.

Disable DS_Store Files

Open Terminal App:

  1. Located in Applications > Utilities

  2. Enter the following command:

    defaults write com.apple.desktopservices DSDontWriteNetworkStores true

Consult: https://www.techrepublic.com/article/how-to-disable-the-creation-of-dsstore-files-for-mac-users-folders/

Brew

As of macOS 14.4.1 (23E224) you may need to install Homebrew for certain commonly used dependencies:

  1. Download Homebrew or install it by running:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Run the commands:

    (echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/YOUR_USERNAME/.zprofile
    
    eval "$(/opt/homebrew/bin/brew shellenv)"

    to add Homebrew to PATH.

    Note that .zprofile is the default shell config from the official documentation.

  3. Run source .zprofile (or source .bash_profile) to reload the relevant terminal configs

  4. To install pkg-config on Mac: brew install pkg-config (required for Python3)

zsh

Duplicate any .bash_profile entries into .zprofile (or .zshrc):

  1. .zprofile is for login shells
  2. .zshrc is for interactive shells

Consult: https://unix.stackexchange.com/questions/71253/what-should-shouldnt-go-in-zshenv-zshrc-zlogin-zprofile-zlogout

Engineering: Windows Install Notes

It's been a while since I used a Windows machine for development. (Windows has traditionally been my preferred and most commonly encountered development environment of choice.) Have to say, I'm immensely impressed by Windows 11!

There are many great changes overall (and, I disagree with some critics - the new taskbar is much improved - centering the icons is a much better experience than having to peer over in the left corner on large screens/UHD 4K TVs).

Node

This will install Node, Git, and Python 2.7.

As of 5/21/2022 - Windows 11 Pro 21H2 22000.675.

For Node:

  1. Make sure to open your terminal of choice with the Run As Administrator option (equivalent in some ways to sudo).

  2. Download Git SCM

  3. Download NVM

  4. Download Python 2.7

  5. Rename the python executable to python2 (node-gyp requires this nomenclature).

  6. Search locally for sysdm.cpl to open System Properties -> Advanced -> Environmental Variables in Windows 11.

  7. Add the install path to your User and System Variables.

  8. nvm install X.Y.Z && nvm use X.Y.Z (in Bash or ZSH) for the specific version of Node you want.

  9. I've had the best luck using Visual Studio 2017 (Community) rather than a newer version. Download after signing into Microsoft here.

  10. Make sure to tick off:

    • C# and Visual Basic Roslyn compilers
    • MSBuild
    • Static analysis tools
    • Visual Studio C++ core features
    • Windows 10 SDK (10.0.17763.0)
  11. Run npm config set msvs_version 2017

  12. Run npm i or whatever npm or node commands you desire.

Java

This will install Java 18, Maven, Tomcat 10, and Gradle.

Updated 8/25/2022 - Windows 11 Pro 21H2 22000.856.

  1. Download Java 18
  2. Download Maven 3.8.6
  3. Download Tomcat 10
  4. Download Gradle 7.5.1
  5. Add the relevant System variables under Advanced system settings > Environment Variables.
  6. GRADLE_HOME should point at your unzipped Gradle root directory.
  7. JAVA_HOME should point to your Java root directory.
  8. MAVEN_HOME should point to your unzipped Maven root directory.
  9. Then add the following to your Path.
  10. %JAVA_HOME%\bin
  11. %GRADLE_HOME%\bin
  12. %MAVEN_HOME%\bin

Run the following commands to verify your installs.

  1. javac -version
  2. mvn -version
  3. gradle --version

Navigate to: http://localhost:8080/ after running the new(ish) Apache Tomcat 10 executable.

Engineering: Azure VM Ubuntu 18.04

Brief setup notes for an Azure VM running Ubuntu 18.04, from macOS.

  1. Create Public and Private Keys

Create a private key. A public key will be created automatically for you.

ssh-keygen -t rsa

Make note of your private key password - this is used to authenticate below.

After creating your Azure Ubuntu 18.04 VM. Take note of your Public IP.

Ensure that the default SSH port is left open.

  2. Connect from Local Machine
sudo ssh -i path/private_key user@public_ip

Use your private key password after connecting.

Ubuntu 18.04 VM

Slightly different setup than Ubuntu 14.04.

sudo apt-get update
sudo apt-get upgrade

Git

sudo apt-get update
sudo apt install git

Node

sudo apt install nodejs
sudo apt install npm

Java

Note: It's recommended to download Java 11+ directly from Oracle.

It's further recommended to use OpenJDK 11.0.2 (and avoid other builds).

Refer to: https://jdk.java.net/archive/

sudo apt-get update
wget "https://download.java.net/java/GA/jdk11/9/GPL/openjdk-11.0.2_linux-x64_bin.tar.gz"
sudo tar -xzvf openjdk-11.0.2_linux-x64_bin.tar.gz
sudo mv jdk-11.0.2 /usr/lib/jvm/

# Config
sudo nano /etc/environment

# Add the line below
# JAVA_HOME="/usr/lib/jvm/jdk-11.0.2/"

# Config
sudo nano ~/.bashrc
# Add the lines below
# JAVA_HOME=/usr/lib/jvm/jdk-11.0.2/
# PATH=$JAVA_HOME/bin:$PATH
source ~/.bashrc

# Verify
echo $JAVA_HOME
javac --version

Tomcat:

groupadd tomcat
useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
cd /opt/ 
sudo wget https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz
tar -xzvf apache-tomcat-9.0.2.tar.gz
sudo mv apache-tomcat-9.0.2 tomcat

Then:

# Permissions
sudo chown -hR tomcat:tomcat tomcat
sudo chmod +xr tomcat/bin/

# Config
sudo nano ~/.bashrc

# Add the line below
# CATALINA_HOME=/opt/tomcat

# Config
source ~/.bashrc

# Verify
echo $CATALINA_HOME

You can now access /opt/tomcat/bin/ to execute: sudo bash startup.sh.

Your Tomcat server will be available by default on http://YOUR_IP:8080 (note the lack of https here).

Engineering: Misc

Commonly encountered and useful topics.

HTML Escape Characters

The non-breaking space entity below can be used in many places where a literal space isn't allowed or would be collapsed (it will often force a visible space to appear):

&nbsp;

https://mateam.net/html-escape-characters/

  1. https://mateam.net/html-escape-characters/

Engineering: OWASP

Summary of common top 10 OWASP identified security vulnerabilities:

  1. Broken Access Control
    • Violation of The Principle of Least Privilege
    • Privilege elevation
    • CORS misconfiguration
    • Insecure direct object references
    • Metadata manipulation
  2. Cryptographic Failures
    • Weak, old, or deprecated cryptographic algorithms
    • Lack of encryption: TLS, etc.
    • Low randomness used: pseudorandom, improper seed
  3. Injection
    • SQL Injection
    • Query Injection
    • Lacking character escaping, validation, filtering, or sanitization
    • Ability for users to execute queries from a client input
  4. Insecure Design
    • Ability of malicious actors to exploit weaknesses in system design, architecture, or business logic
    • Example: not restricting the number of tickets a person can buy, not having rate limiting, etc.
  5. Security Misconfiguration
    • Leaving Ports open
    • Insufficient security hardening
    • A system whose dependencies aren't updated
    • Disabled security features
    • Incompatible dependency versions
  6. Vulnerable and Outdated Components
    • Deprecated, out-of-date, or old software and/or dependencies
    • Lacking a critical security patch
  7. Identification and Authentication Failures
    • Lacks multi-factor authentication
    • Allows for weak passwords
    • Transmission of passwords in plaintext
    • Susceptibility to automated or brute-forcing attacks
  8. Software and Data Integrity Failures
    • Downloading unsigned dependencies from a remote repository
    • Downloading dependencies from an untrusted remote repository
    • A compromised update
    • Insecure deserialization
  9. Security Logging and Monitoring Failures
    • Exposing critical or sensitive data within accessible logs
    • Unclear or unhelpful logging messages
    • Lacking proper logging for critical events
    • Lacking the proper visibility into critical events, services, or systems
    • Lacking appropriate mechanisms to remediate problems: escalation, on-call rotations, etc.
  10. Server-Side Request Forgery
    • Improperly allowing a web application and/or its resources to be accessed or controlled remotely
    • Allowing an unauthenticated, unvalidated, or unauthorized agent to access a resource remotely
  1. https://owasp.org/www-project-top-ten/
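
To make the Injection item concrete: building SQL by string concatenation lets client input rewrite the query, whereas a parameterized query (the JDBC PreparedStatement approach) transmits the input strictly as data. A toy illustration - no database required, and all names are my own:

```java
// Demonstrates why concatenated SQL is injectable. In real code,
// use PreparedStatement with '?' placeholders instead.
public class InjectionDemo {
    public static String unsafeQuery(String userInput) {
        // Vulnerable: the input becomes part of the query structure.
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    // Safe shape: the driver binds the parameter separately, so it
    // can never alter the WHERE clause.
    public static final String PARAMETERIZED =
        "SELECT * FROM users WHERE name = ?";

    public static void main(String[] args) {
        // A classic payload turns the WHERE clause into a tautology:
        System.out.println(unsafeQuery("x' OR '1'='1"));
    }
}
```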

Engineering: Application Security

A concise, quick, overview.

SOC2

  1. Every completed task must be documented.
  2. Every customer-facing or production task must have a corresponding ticket.

OWASP

  1. Available here: https://owasp.org/www-project-top-ten/
  2. Common Security Vulnerabilities include XSS, SQL Injection, string timing attacks, etc.

Encrypt at Rest

  1. Encrypt database information within the database.
  2. Never store sensitive PII information in plaintext.

Ports

  1. Verify that all unneeded Ports are closed.

Secure Access Workstation

  1. Increasingly popular.
  2. An air-gapped physical machine allowing access to a single user at a single precise time.
  3. The air-gapped physical machine then connects to an SSH Bastion.

Engineering: OAuth 2.0

Grant Types

The four main Authorization Grant Types are as follows (note that the PKCE extension is now recommended for supplemental use in all other Grant Types):

Authorization Code

The flow:

  1. A resource R1 requests access to a protected resource R2.
  2. A user (who typically owns or manages both resources) is directed to an intermediary authorization server to authorize R1's access to R2's data (for some duration of time).
  3. After authorizing access, the user is redirected to a specified URL (often a view in R1): the Redirect URL.

This is the most common and secure Grant Type - use it by default. It's also the paradigm or model OAuth flow that inspired the others.
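
The redirect in step 2 carries a handful of standard query parameters. A sketch of building that authorization URL (the endpoint, client ID, and redirect values are placeholders):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Builds the authorization request URL for the Authorization Code flow.
// response_type=code asks the server to return an authorization code;
// state guards against CSRF on the redirect back.
public class AuthorizeUrl {
    public static String build(String authEndpoint, String clientId,
                               String redirectUri, String state) {
        return authEndpoint
            + "?response_type=code"
            + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
            + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8)
            + "&state=" + URLEncoder.encode(state, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(build("https://auth.example.com/authorize",
            "my-client-id", "https://r1.example.com/callback", "xyz123"));
    }
}
```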

Client Credentials

  1. Used in server-to-server access where one resource is able to access a second (common in micro-services).

Device Code

  1. Where a device with limited human input capabilities is granted access to a resource through another human input medium.
  2. Used to give IoT devices access to some endpoint. These devices may lack a keyboard.

Refresh

  1. Used to refresh a previously acquired valid token.

Deprecated

Note that the Password and Implicit Grant Types are now deprecated:

Password

  1. Deprecated.
  2. Essentially, your standard user-submitted password to get a token authentication scheme.

Implicit

  1. Deprecated.
  2. Often used in minimal security conditions (like a simple JavaScript app) to acquire a token from an auth server through a (registered) Client ID.
  3. Tokens are not refreshed but simply reacquired.
  1. https://oauth.net/2/

Engineering: Service Layer Architecture

  1. Presentation Layer - what the user sees and interacts with. A web Client, mobile device, terminal, etc. Corresponds to the View in MVC pattern.
  2. Service Layer - contains Business Logic, Controllers, Handlers, and Services that process Data being sent from a Client, and that return processed Data to the Client.
  3. Data Layer - persisted Data that exists at rest or used in-memory.

Engineering: Model View Controller

  1. Model - programmatic representations of Data defined in a Domain. Run Time, in-memory, and programmatic representations of persisted data that may or may not be managed 1:1 through an Object Relational Mapping framework like Hibernate.
  2. View - what's rendered and presented to the user.
  3. Controller - HTTP REST Controllers and Handlers that reside at specific Endpoints (e.g. - URL, URL Path, IP Address, Domain Name, Port Number) to handle specific HTTP Requests / HTTP Methods.

Engineering: Design Patterns

Some common design patterns.

Singleton

Refer to the other Singleton article.

Factory

The Factory design pattern creates instances of a specific type. The Factory itself is usually instantiated once, often as a Singleton (single-instance object).

// Java
public class Dood {
    //...
}

public class DoodFactory {
    //...
    public Dood getDooder() {
        return new Dood();
    }
}

// Usage:
// Dood dood = new DoodFactory().getDooder();

Abstract Factory

Suppose that K is a subclass (or implementation) of T: make a Factory of kind T whose concrete Factories return Objects of kind K (through an abstract interface).

The Abstract Factory design pattern is essentially the Factory design pattern but involves Abstract Factories in correspondence with and supporting an abstract (here, meaning higher-order) interface implementation hierarchy or abstract class hierarchy:

First, a (higher-order) interface:

// Java
public interface A {
    //...
}

Two implementations of that interface:

// Java
public class B implements A {
    //...
}

public class C implements A {
    //...
}

Now, we create a factory hierarchy to parallel the implementation hierarchy:

// Java
public interface AbstractFactory {
    public A createA();
}

We then create two concrete Factory implementations of AbstractFactory:

// Java
public class BFactory implements AbstractFactory {
    public A createA() {
        B b = new B();
        return b;
    }
}

public class CFactory implements AbstractFactory {
    public A createA() {
        C c = new C();
        return c;
    }
}

Model View Controller

Refer to the other Model View Controller article.

Decorator

Uses annotations like @Component to configure some item or add functionality. (In framework usage, annotations play a decorator-like role; the classic GoF Decorator wraps an object at runtime to add behavior.)

Examples:

  1. Angular
  2. Java Spring
  3. Java EE
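
Beyond annotations, the classic GoF Decorator wraps an existing object behind the same interface to layer on behavior at runtime. A minimal sketch (all names are my own):

```java
// Classic Decorator: wrap a component behind its own interface
// to add behavior without modifying the original class.
interface Notifier {
    String send(String message);
}

class BasicNotifier implements Notifier {
    public String send(String message) {
        return message;
    }
}

class TimestampDecorator implements Notifier {
    private final Notifier wrapped;

    TimestampDecorator(Notifier wrapped) {
        this.wrapped = wrapped;
    }

    public String send(String message) {
        // Adds a prefix, then delegates to the wrapped Notifier.
        return "[2023-01-01] " + wrapped.send(message);
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Notifier n = new TimestampDecorator(new BasicNotifier());
        System.out.println(n.send("hello")); // [2023-01-01] hello
    }
}
```

Decorators compose: wrapping a TimestampDecorator in another decorator adds both behaviors.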

Adapter

The Adapter pattern takes two incompatible implementations, Interfaces, or Classes, and provides a way to bridge them. (Hence, "Adapter" - to adapt them to each other.)

// Java
public interface GermanPlugConnector {
    public void giveElectricity();
}

public class GermanElectricalSocket {
    public void plugIn(GermanPlugConnector plug) {
        plug.giveElectricity();
    }
}
// Java
public interface UKPlugConnector {
    public void provideElectricity();
}

public class UKElectricalSocket {
    public void plugIn(UKPlugConnector plug) {
        plug.provideElectricity();
    }
}

These are thus far incompatible and require an Adapter to bring them into harmony:

// Java
public class GermanConnectorToUKSocketAdapter implements UKPlugConnector {
    private GermanPlugConnector plug;

    public GermanConnectorToUKSocketAdapter(GermanPlugConnector plug) {
        this.plug = plug;
    }

    @Override
    public void provideElectricity() {
        plug.giveElectricity();
    }
}

Above we implemented the compatible Plug connector (UKPlugConnector) but overrode provideElectricity() so that it now invokes giveElectricity() on the incompatible Plug connector (GermanPlugConnector).

We have thus performed a little "switch-a-roo" on the main point of incompatibility by using a third interface to bring the two incompatible types into harmony thereby.

Now we explicitly invoke our adapter to representationally allow the GermanPlugConnector to plug into the UKElectricalSocket:

// Java
GermanPlugConnector plugConnector = //.. create a GermanPlugConnector

//Create a UKElectricalSocket
UKElectricalSocket electricalSocket = new UKElectricalSocket();

//We adapt the two
UKPlugConnector ukAdapter = new GermanConnectorToUKSocketAdapter(plugConnector);

//And now receive the electricity
electricalSocket.plugIn(ukAdapter);

Example corrected, updated, and modified from: http://www.vogella.com/tutorials/DesignPatternAdapter/article.html

  1. http://www.journaldev.com/1827/java-design-patterns-example-tutorial

Engineering: The Twelve-Factor App

Summary of Twelve-Factor App Principles:

  1. Codebase - One codebase tracked in revision control, many deploys.
    • A single codebase is used for an App.
      • E.g. - a specific repo in GitLab, Bitbucket, Subversion, Azure Team Foundation, or GitHub.
    • Use a versioning system to track different deployments and changes.
    • Use a version control tool like Git or Subversion.
  2. Dependencies - Explicitly declare and isolate dependencies.
    • All dependencies are explicitly and expressly declared in a manifest.
    • Examples: package.json, pom.xml, requirements.txt, etc.
    • Use a package manager.
      • Dependencies should be versioned and modularized so their exact contents can be specified.
  3. Config - Store config in the environment.
    • App Configuration is kept out of the codebase.
    • It should be stored as Environment Variables within the Environment itself.
    • Example: passing Docker parameters that are exposed as Environmental Variables within an App Container.
    • App Configuration should be grouped by (Staging) Environment.
  4. Backing Services - Treat backing services as attached resources
    • Any Service or infrastructure dependency is treated as a Resource that's attached (containerized or configured along with the Service in question).
    • Examples: docker-compose.yml, Terraform, Cloud Formation
  5. Build, Release, Run - Strictly separate build and run stages.
    • Separate a deployment into Build, Release, and Run stages.
    • All Releases should have a unique identifier.
  6. Processes - Execute the app as one or more stateless processes.
    • Apps should be run as Processes.
    • They should be Stateless.
    • Example: containerizing an App in a Docker Container that spins up and initializes the App state each time it's run (state isn't preserved).
  7. Port Binding - Export services via port binding.
    • Preference for directly binding an App to a Port via a Webserver library (Jetty).
    • As opposed to using a Web Container (Tomcat).
  8. Concurrency - Scale out via the process model.
    • While one may need to spawn or multithread, these processes should use established tools and infrastructure to do so (Node exec, JVM Threads, Windows Process management).
    • Don't create/spawn Daemons when the above can be used out of the box.
  9. Disposability - Maximize robustness with fast startup and graceful shutdown.
    • Processes should be easy to terminate.
    • Deployed Services shouldn't be unnecessarily dependent on or entangled with other systems. They should be sufficiently decoupled so that they can be started and stopped easily.
  10. Dev/Prod Parity - Keep development, staging, and production as similar as possible.
    • (Staging) Environments should be 1:1.
  11. Logs - Treat logs as event streams.
    • All logging events are printed to STDOUT and never routed within the app.
    • All log organization is handled at the Environment level.
    • Example:
      • Set Logging Levels within an AWS Environment so that it can be collated into a log ingestion Service.
      • Rather than routing logs using, say, distinct Log4j Appenders.
  12. Admin Processes - Run admin/management tasks as one-off processes.
    • They should have a finite execution interval (should not be ongoing or occasion indefinite access to say PROD).
    • They should always use the most up-to-date config and code (not ad-hoc scripts or old code versions).
  1. https://12factor.net/
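
Factor 3 (Config) in code: read settings from the process environment with a local-development fallback, rather than hardcoding per-environment values. A sketch (the variable name and fallback URL are illustrative):

```java
import java.util.Map;

// Reads configuration from environment variables (Twelve-Factor
// factor 3), falling back to a default for local development.
public class EnvConfig {
    public static String get(Map<String, String> env, String key, String fallback) {
        String value = env.get(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String dbUrl = get(System.getenv(), "DATABASE_URL",
            "jdbc:postgresql://localhost:5432/dev");
        System.out.println(dbUrl);
    }
}
```

Passing the environment map in explicitly (rather than calling System.getenv() inside) keeps the lookup testable.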

Engineering: Producers and Consumers

  1. Consumer - The resource that consumes, receives, and/or uses Messages or Events sent by the Producer.
  2. Producer - The emitter or sender of a Message or Event to the Consumer.

Publish Subscribe and Message Queues

Several commonly encountered patterns that involve Producer and Consumer topics:

  1. Publish-Subscribe
    • Subscribers listen to, wait for, poll, or otherwise directly subscribe to a Publisher (specifically, a Publisher's Topics) which emits an Event or Message to all appropriate Subscribers.
    • Examples:
      • AWS Simple Notification Service (AWS SNS)
      • Apache Kafka
      • WebSockets
  2. Message Queue
    • A Message Queue sits in-between a Producer and Consumer. Often explicitly uses an intermediary called (or appropriately likened to) a Message Broker.
    • Examples:
      • ActiveMQ
      • RabbitMQ
      • AWS Simple Queue Service (AWS SQS)
  3. SOAP
    • A Consumer requests a WSDL from the Producer to mirror the Domain entities of a Producer (generates the shared contract for the Consumer).
    • This allows SOAP requests to be correctly made from the Consumer to the Producer and for the Consumer to correctly handle the corresponding SOAP responses.
  4. REST
    • A Client (Consumer) makes an HTTP Request to a Server (Producer) which returns an HTTP Response.

Note that this distinction isn't strictly mutually exclusive. While Quastor's write-up treats RabbitMQ as a Message Queue, the actual RabbitMQ documentation also refers to it as Publish/Subscribe (since it can be configured as such).

  1. https://blog.quastor.org/p/tech-dive-apache-kafka
  2. https://www.rabbitmq.com/tutorials/tutorial-three-spring-amqp

Code samples:

  1. https://github.com/Thoughtscript/java-reactive-pubsub/tree/main
  2. https://github.com/Thoughtscript/java_soap_wsdl_2023/tree/main

Engineering: Systems Design

Topics

  1. Separation of Concerns - Access, Visibility, proper decomposition of a Monolithic app, database access, subnets, private networks
  2. Concurrency - Thread, Thread and Worker Pooling, database Connection Pooling, use of Mutex and Thread-Safety, Asynchronous and Non-Blocking, Load Balancing, Sharding
  3. Events, Messaging, and Event Priority - use of Cron Jobs, Queues, Event Scheduling, Event Workflows
  4. Fault Tolerance - High Availability and Disaster Recovery, ProdEng manual overrides, data auditing and integrity, Graceful and Exponential Backoff, Dead Letter and Retry Queues
  5. Performance - Caching, Views and Indexes, Algorithm optimization, Elasticity, duplex connections
  6. Best Practices - Security, compliance, Integration Testing and environments, End to End Acceptance Tests, etc.

Some Approaches

Some common, famous, and/or high-performance approaches:

  1. Decouple Events, Consumers, and Producers - Apache Kafka remains a top choice at many firms (PayPal, Uber, Dosh, etc.).
  2. A move from Relational Databases to Document Store Databases, reducing the overhead of Sharding, the need for Cross-Database JOINs, SQL Table Normalization, and IO/latency.
    • Modifying SQL Table Columns is very expensive on large datasets that are distributed over many Shards.
  3. Extensive use of Sharding, Consistent Hashing (minimizes the number of Keys that must be remapped as the number of Shards changes), and Load Balancing across different Application Layers.
  4. Lots of Caching:
    • Clear distinction between READ and WRITE intensive resources (and corresponding resource responsibilities - READ Replica, etc.).
    • Materialized Views, etc.
    • Extensive use of Memcached, Redis, and/or Couchbase for READ intensive operations.
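A minimal sketch of the Consistent Hashing idea (illustrative only - the hash function, ring size, and shard names are hypothetical; production systems add virtual nodes and stronger hashes). Shards occupy positions on a ring, each Key maps to the next shard position at or after it, and adding a shard only remaps the Keys that now land on the new shard:

```javascript
// Simple deterministic string hash (djb2 variant), mapped onto a 0-359 ring
function hashDeg(s) {
  let h = 5381
  for (const ch of s) h = ((h * 33) + ch.charCodeAt(0)) >>> 0
  return h % 360
}

class Ring {
  constructor(shards) {
    // Each shard occupies one sorted position on the ring
    this.points = shards.map(s => [hashDeg(s), s]).sort((a, b) => a[0] - b[0])
  }
  locate(key) {
    // A Key belongs to the first shard at or after its position (wrapping around)
    const h = hashDeg(key)
    const found = this.points.find(([p]) => h <= p)
    return found ? found[1] : this.points[0][1]
  }
}

const keys = Array.from({ length: 20 }, (_, i) => `user-${i}`)
const before = new Ring(['shard-a', 'shard-b', 'shard-c'])
const after = new Ring(['shard-a', 'shard-b', 'shard-c', 'shard-d'])

// Only Keys now claimed by shard-d moved; every other Key kept its shard
const moved = keys.filter(k => before.locate(k) !== after.locate(k))
console.log(moved.every(k => after.locate(k) === 'shard-d')) // true
```

With naive hashing (`hash(key) % shardCount`), nearly every Key would be remapped when the shard count changes; here only the Keys falling between the new shard and its ring predecessor move.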

Electrical Engineering: Watts Volts Amps Ohms

Basic Equivalences.

  1. Resistance
    • Expressed as Ohms (Ω) below.
    • The aggregate opposition to Charge movement.
    • The opposition to Electrons as they flow through material.
  2. Current
    • Expressed as Amps (A) below.
    • Rate of flow of Charge.
  3. Voltage
    • Expressed as Volts (V) below.
    • Difference in Electrical Potential.
  4. Power
    • Expressed as Watts (W) below.
    • The amount of energy produced or consumed by an electrical device.

Electrons flow through material when there's an imbalance in Electrical Potential (Voltage). They flow at a certain rate (Current) which is modified by Resisting factors that inhibit how the Electrons move.

Ohm's Law

V = I × R (equivalently, I = V / R and R = V / I)

Amps

I = V / R = P / V = √(P / R)

Volts

V = I × R = P / I = √(P × R)

Watts

P = V × I = I² × R = V² / R

  1. https://www.rapidtables.com/calc/electric/ohms-law-calculator.html
  2. https://www.usna.edu/ECE/ee301/Lecture%20Notes/EE301_Lesson_01_Voltage_Current_Resistance_Ohms.pdf
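The equivalences above can be expressed directly; a quick JavaScript sketch (function names and values are illustrative):

```javascript
// Ohm's Law and the Power equation
const volts = (amps, ohms) => amps * ohms   // V = I × R
const amps = (volts, ohms) => volts / ohms  // I = V / R
const watts = (volts, amps) => volts * amps // P = V × I

// A 3 Ω resistor carrying 2 A drops 6 V and dissipates 12 W
console.log(volts(2, 3))           // 6
console.log(watts(volts(2, 3), 2)) // 12
```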

Electrical Engineering: Static Electricity

Electrons, Neutrons, and Protons

  1. Electrons - are Negatively Charged Particles since they have a negative net Electrical Charge:
    • −1.602176634 × 10⁻¹⁹ coulombs
  2. Neutrons - are Neutrally Charged Particles since they have no net Electrical Charge.
  3. Protons - are Positively Charged Particles since they have a positive net Electrical Charge.

We usually count the Electric Charge of an Atom by the difference in Electrons and Protons.

Static Electricity

Static Electricity is an imbalance of Electrical Charge between two items (particularly on their surface).

Static Electricity shocks occur when there is:

  1. An imbalance of Electrical Charge such that:
    • One touching item is Negatively Charged (has many more Electrons)
    • The other item is Positively Charged (has far fewer Electrons)

Static (Electricity) cling occurs when:

  1. Lightweight items stick to another item owing to differences in Electrical Charge.
  2. Example: confetti sticking to a plastic balloon.

Static (Electricity) hair raising occurs when there is:

  1. An excessive amount of positive Electrical Charge, since two positively charged items will repel each other.
  2. (Two positively charged items repel each other. Two negatively charged items repel each other.)

Static Electricity grounding:

  1. Using a conductive material to keep two items at the same, common, balanced Electrical Charge.
  2. To use a conductive cord, wire, or other material to prevent Static Electricity shocks.
    • Commonly involves attaching metal wires to two items and connecting those wires to a ground block, wall, or metal object driven into the ground.

Prevention Techniques

  1. Ground items in the manner described above.
  2. Avoid certain materials like wool, polypropylene, etc. that have either a tendency to generate Static Electricity or that lack the intrinsic ability to discharge it (e.g. - because they are good insulators).
  3. Touch small metal objects frequently and then touch those objects against some grounded item (a metal doorframe, a metal floor lamp, etc.).
  4. Increase the amount of humidity in the air (since dry air increases the chance of Static Electricity).
    • This option should be pursued only as a last resort around electronics since humidity can damage sensitive electrical components (water is conductive).

Math: Probability

Probability calculates the likelihood of an Outcome occurring a priori.

Probability Space

A Probability Space is defined as the Triple (Ω, E, P):

  1. A Probability Function:
    • P mapped to the interval [0,1]
    • P : E → [0,1]
  2. A Set of Outcomes Ω.
    • Say, a six-sided die is being represented: Ω = {1,2,3,4,5,6}
  3. An Event Space E of all Events such that ∀(e ∈ E)(e ⊆ Ω).
    • E.g. - e₀ ∈ E and e₀ = {2, 5} (Complex Event where it lands on any of 2, 5.)
    • E.g. - e₁ ∈ E and e₁ = {1} (Lands on 1.)
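For the fair six-sided die above, the Probability Function reduces to P(e) = |e| / |Ω|; a small JavaScript sketch of the example Events:

```javascript
const OMEGA = [1, 2, 3, 4, 5, 6] // the Set of Outcomes Ω
const P = e => e.length / OMEGA.length // uniform Probability Function

const e0 = [2, 5] // Complex Event: lands on any of 2, 5
const e1 = [1]    // lands on 1

console.log(P(e0)) // 0.3333333333333333
console.log(P(e1)) // 0.16666666666666666
```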

Events

https://en.wikipedia.org/wiki/Probability_space

Probability Rules

Definitions:

General Rules:

  1. Addition Rule:
    • (For any Events A, B:) P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
  2. Multiplication Rule:
    • (When A, B are Independent Events:) P(A ∩ B) = P(A) × P(B)
  3. Complement Rule:
    • P(A′) = 1 - P(A)
    • The likelihood of A′ occurring is 1 minus the likelihood of A.
  4. Conditional Probability:
    • P(A|B) = P(A ∩ B) / P(B)
    • Bayes' Theorem: P(A|B) = (P(B|A) × P(A)) / P(B)
    • Note that: P(A|B) ≠ P(B|A) (not Symmetric).

https://www.geeksforgeeks.org/probability-rules/

  1. Both, Conjunction:
    • A, B Independent: P(A ∩ B) = P(A) × P(B)
    • A, B Dependent: P(A ∩ B) = P(A) × P(B|A)
    • A, B Mutually Exclusive: P(A ∩ B) = 0
  2. XOR, OR:
    • General Form: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
    • A, B Independent: P(A ∪ B) = P(A) + P(B) - P(A) × P(B)
    • A, B Dependent: P(A ∪ B) = P(A) + P(B) - P(A) × P(B|A)
    • A, B Mutually Exclusive: P(A ∪ B) = P(A) + P(B) - 0 = P(A) + P(B)
  3. Conditional, Dependent:
    • General Form: P(A|B) = P(A ∩ B) / P(B)
    • A, B Independent: P(A|B) = (P(A) × P(B)) / P(B)
    • A, B Dependent: P(A|B) = (P(A) × P(B|A)) / P(B)
    • A, B Mutually Exclusive: P(A|B) = 0 / P(B) = 0
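The rules above can be checked numerically for a fair six-sided die; a minimal sketch (Events represented as Arrays of outcomes, uniform probability assumed):

```javascript
const OMEGA = [1, 2, 3, 4, 5, 6]
const P = e => e.length / OMEGA.length // uniform probability
const union = (a, b) => [...new Set([...a, ...b])]
const intersect = (a, b) => a.filter(x => b.includes(x))

const A = [2, 5] // lands on 2 or 5
const B = [2]    // lands on 2

// Addition Rule: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
const lhs = P(union(A, B))
const rhs = P(A) + P(B) - P(intersect(A, B))
console.log(Math.abs(lhs - rhs) < 1e-12) // true

// Complement Rule: P(A′) = 1 - P(A)
const Aprime = OMEGA.filter(x => !A.includes(x))
console.log(Math.abs(P(Aprime) - (1 - P(A))) < 1e-12) // true

// Conditional Probability isn't Symmetric: P(A|B) ≠ P(B|A) here
const pAgivenB = P(intersect(A, B)) / P(B) // 1
const pBgivenA = P(intersect(A, B)) / P(A) // 0.5
```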

https://www.nasa.gov/wp-content/uploads/2023/11/210624-probability-formulas.pdf

  1. https://en.wikipedia.org/wiki/Probability_space
  2. https://www.geeksforgeeks.org/probability-rules/
  3. https://www.nasa.gov/wp-content/uploads/2023/11/210624-probability-formulas.pdf

Math: Permutations

Definitions

  1. Permutations: ₙPₐ = n! / (n - a)!
    • The number of Permutations of n items taken a-many at a time.
    • Consider Heap's Algorithm.
      • Given (the digit String) 012 the generated Permutations are [ '012', '102', '201', '021', '120', '210' ]
    • An arrangement of elements (with or without repetition) where order matters.
      • Order matters: 012 ≠ 102.
      • Counts String Permutations like 0012, 1102, etc. when repetition is allowed.
  2. Combinations: ₙCₐ = n! / (a! × (n - a)!)
    • The number of Combinations of n items taken a-many at a time.
    • An arrangement of elements (with or without repetition) where order doesn't matter.
      • Order doesn't matter: 012 = 102.
        • 012, 102 are elements of the same Equivalence Class using the generated Strings (from above).
        • Are considered the same String Combination despite the ordering of the digits 0, 1, 2.
      • Counts String Combinations like 0012, 1102, etc. when repetition is allowed.
  3. Variations
    • Without repetition (order matters): ₙVₐ = ₙPₐ = n! / (n - a)!
    • With repetition (order matters): ₙVₐ = nᵃ
    • The number of Variations of n items taken a-many at a time.
    • An arrangement of elements (with or without repetition) where order matters.
      • Order matters: 012 ≠ 102.
      • Counts String Variations like 0012, 1102, etc. when repetition is allowed.
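The formulas above can be computed directly for small n; a short sketch (exact only while n! stays within JavaScript's safe integer range):

```javascript
// n! computed directly (exact for small n)
const fact = n => n <= 1 ? 1 : n * fact(n - 1)

// Permutations: ₙPₐ = n! / (n - a)!
const nPa = (n, a) => fact(n) / fact(n - a)
// Combinations: ₙCₐ = n! / (a! × (n - a)!)
const nCa = (n, a) => fact(n) / (fact(a) * fact(n - a))
// Variations with repetition: nᵃ
const nVaRep = (n, a) => Math.pow(n, a)

console.log(nPa(3, 3))    // 6 - matches the six Permutations of '012' above
console.log(nCa(6, 2))    // 15
console.log(nVaRep(3, 2)) // 9
```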

Relationships

  1. https://www.geeksforgeeks.org/permutations-and-combinations/
  2. http://webpages.charlotte.edu/ghetyei/courses/old/S23.3166/Permutations_Combinations_Variations.pdf
  3. https://ds.johnpospisil.com/probability/combinatorics/

Math: Statistics

Statistics calculates the likelihood of an Outcome occurring by observed Frequency, trend, or pattern.

Median, Mean, Mode

Given a sum S of L-many values of some (sorted) Set of events E:

Standard Normal Distribution

https://www.cse.wustl.edu/~garnett/cse515t/fall_2019/files/lecture_notes/5.pdf

const MEAN_MU = 1 // μ
const SIGMA = 1   // σ (the Standard Deviation; σ² is the Variance)
const X = 1
// Normal PDF: f(x) = (1 / (σ × √(2π))) × e^(−(x − μ)² / (2σ²))
const NORMALIZATION_CONSTANT = 1 / (SIGMA * Math.sqrt(2 * Math.PI))
const EXP = -1 * Math.pow(X - MEAN_MU, 2) / (2 * Math.pow(SIGMA, 2))
const GAUSS = NORMALIZATION_CONSTANT * Math.pow(Math.E, EXP)

The Math 142 course I took limits its content to the subtopics below. (We didn't cover calculating Z Scores from scratch.)

Standard Deviation

Of a Normal Distribution.

Given a set of values {X₀, ..., Xₙ}:

  1. Calculate the Mean (μ):
    • In Normal Distribution, the Mean, Mode, and Median are the same.
    • μ = (X₀ + ... + Xₙ)/n
  2. Calculate the Standard Deviation (σ) like so:
    • σ = √(((X₀-μ)² + ... + (Xₙ-μ)²) / (n - 1)) (the Sample Standard Deviation; the population version divides by n instead)
  3. It's also helpful to remember the following:
    • μ ± σ (the range μ - σ <= X <= μ + σ, one Standard Deviation) covers ~68%
    • μ ± 2σ (the range μ - 2σ <= X <= μ + 2σ, two Standard Deviations) covers ~95%
    • μ ± 3σ (the range μ - 3σ <= X <= μ + 3σ, three Standard Deviations) covers ~99.7%
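The steps above can be sketched directly (the sample values are hypothetical; note the division by n - 1 for the sample Standard Deviation vs. n for the population):

```javascript
// Hypothetical sample
const xs = [2, 4, 4, 4, 5, 5, 7, 9]

// Mean: μ = (X₀ + ... + Xₙ) / n
const mean = xs.reduce((a, b) => a + b, 0) / xs.length // 5

// Sum of squared deviations: (X₀ - μ)² + ... + (Xₙ - μ)²
const sumSq = xs.map(x => Math.pow(x - mean, 2)).reduce((a, b) => a + b, 0)

// Population Standard Deviation divides by n...
const sigmaPopulation = Math.sqrt(sumSq / xs.length) // 2
// ...while the Sample Standard Deviation divides by (n - 1)
const sigmaSample = Math.sqrt(sumSq / (xs.length - 1))
```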

Calculate Z Score

General approach:

  1. Calculate Z (the number of Standard Deviations) independently.
    • This can be calculated from a specified Percentage (say 10% of the total Area)
    • Or from a Score using the same equation: Z = (x - μ) / σ where x (lowercase) is the specified Score.
  2. Then use that value to solve for some X (some Score within the Standard Deviation) Z = (X - μ) / σ or Z * σ + μ = X.
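Both directions can be sketched in a couple of lines (μ = 100 and σ = 15 here are hypothetical values):

```javascript
// Z = (x - μ) / σ
const zScore = (x, mu, sigma) => (x - mu) / sigma
// X = Z × σ + μ
const scoreFromZ = (z, mu, sigma) => z * sigma + mu

console.log(zScore(130, 100, 15))   // 2 - two Standard Deviations above the Mean
console.log(scoreFromZ(2, 100, 15)) // 130
```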
  1. https://www.geeksforgeeks.org/statistics/?ref=shm
  2. https://www.geeksforgeeks.org/standard-normal-distribution/#normal-distribution-definition
  3. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/PI
  4. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/E
  5. https://www.cse.wustl.edu/~garnett/cse515t/fall_2019/files/lecture_notes/5.pdf
  6. https://mathblog.com/statistics/definitions/z-score/percentile-to-z-table/

Code samples:

  1. https://leetcode.com/problems/median-of-two-sorted-arrays/description/

Math: Finance Topics

Math and Finance topics often encountered in Programming/Software Engineering.

Amortization and Compound Interest

Non-Amortizing Loans (https://corporatefinanceinstitute.com/resources/commercial-lending/non-amortizing-loan/) are computed in a number of ways.

One way involves Basic Compound Interest divided by Total Months:

const PRINCIPLE = 1000
const ANNUAL_INTEREST_RATE = .07
const YEARS = 5

const TOTAL_LOAN_PAYMENTS = PRINCIPLE * Math.pow(1 + ANNUAL_INTEREST_RATE, YEARS)
console.log(TOTAL_LOAN_PAYMENTS) // 1402.5517307000005

const MONTHLY_PAYMENTS = TOTAL_LOAN_PAYMENTS / (YEARS * 12)
console.log(MONTHLY_PAYMENTS) // 23.375862178333342

Amortization is a standard technique to calculate Monthly or Total Payments (over time) as the Principal (of a Loan) is paid down:

  1. Standard approach to calculate Monthly Payments A: A = P × (i × (1 + i)ⁿ) / ((1 + i)ⁿ - 1)
     const ANNUAL_INTEREST_RATE = .07
     const PRINCIPLE = 1000
     const MONTHLY_INTEREST_RATE = ANNUAL_INTEREST_RATE / 12
     const NUMER = MONTHLY_INTEREST_RATE * Math.pow(1 + MONTHLY_INTEREST_RATE, 60)
     const DENOM = Math.pow(1 + MONTHLY_INTEREST_RATE, 60) - 1
     const MONTHLY_PAYMENTS = PRINCIPLE * NUMER / DENOM
     console.log(MONTHLY_PAYMENTS) // 19.801198540349468
  2. Computations vary for Monthly or Total Amortization Payments/Schedules/Table.
  3. There are also different kinds of Amortization: https://www.lendingclub.com/resource-center/business-loan/common-types-of-amortization-can-impact-you (and each is computed a bit differently).
  4. Computed long-hand here: https://www.codewars.com/kata/59c68ea2aeb2843e18000109

Precision and Rounding

Rounding and Precision effects in JavaScript (and languages with equivalent inbuilt Math functions).

Such effects are often encountered since quantities, amounts, etc. don't cleanly divide (and must therefore be Rounded or Truncated).

Most currencies and denominations (globally) are represented with two decimal places of Precision (e.g. - $1.41).

https://www.public.asu.edu/~kamman/notes/finance/amortized-loan-notes.pdf helpfully describes Amortization, Rounding, and Precision.

Verbiage

"To precision" generally means to the number of significant numbers or integers after the decimal point (the Character or sign .):

from decimal import *
getcontext().prec = 6
Decimal(1) / Decimal(7)  # Decimal('0.142857')

To Cents

Additionally, it's common to represent "Dollar" amounts as "Cents" to avoid some of these issues.

Advantages:

  1. Don't have to worry about Precision after the decimal point.
  2. Can help avoid unintentional duplicate Rounding Effects.

Disadvantages:

  1. Cents must still be Rounded.
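A minimal sketch of the Cents representation (function names are illustrative):

```javascript
// Convert a Dollar amount to integer Cents (the single Rounding step)
const toCents = dollars => Math.round(dollars * 100)
// Format integer Cents back into a Dollar string
const toDollars = cents => (cents / 100).toFixed(2)

console.log(toCents(19.99))  // 1999
console.log(toDollars(1999)) // '19.99'
// Integer arithmetic sidesteps intermediate Floating Point drift:
console.log(toCents(0.1) + toCents(0.2)) // 30, vs 0.1 + 0.2 === 0.30000000000000004
```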

Inbuilt Functions

Note that toPrecision() will unacceptably Truncate in some cases:

let num = 5.123456;

console.log(num.toPrecision()); // '5.123456'
console.log(num.toPrecision(5)); // '5.1235'

And, if the numbers are too big it'll return scientific notation. For example:

const N = 100000 / 24
console.log(N)
console.log(N.toPrecision(3)) // "4.17e+3"
console.log(N.toPrecision(6)) // "4166.67"

Mathematical Approaches and Techniques

JavaScript's Math.round(), Math.ceil(), and Math.floor() methods, combined with a multiplier, offer versatile solutions for computing to a specific Precision.

This is likely the most commonly encountered approach to Round to Precision 2 (from personal work experience in Finance and from widely available blogs like: https://www.zipy.ai/blog/how-to-round-to-at-most-two-decimal-places-in-javascript):

Round to nearest:

let number = 2.12556;
let rounded = Math.round(100 * number) / 100
console.log(rounded) // Output: 2.13

Always rounding up:

let number = 2.12556
let rounded = Math.ceil(number * 100) / 100
console.log(rounded) // Output: 2.13

Always rounding down:

let number = 2.12556
let rounded = Math.floor(number * 100) / 100
console.log(rounded) // Output: 2.12
  1. https://corporatefinanceinstitute.com/resources/commercial-lending/non-amortizing-loan/
  2. https://www.investopedia.com/terms/a/amortization.asp
  3. https://www.lendingclub.com/resource-center/business-loan/common-types-of-amortization-can-impact-you
  4. https://www.public.asu.edu/~kamman/notes/finance/amortized-loan-notes.pdf
  5. https://www.zipy.ai/blog/how-to-round-to-at-most-two-decimal-places-in-javascript

Code samples:

  1. https://www.codewars.com/kata/59c68ea2aeb2843e18000109

Docker: Overview

Docker is a platform for building, shipping, and running applications.

Key Tools

Images and Containers

Virtual Machines and Hypervisors

Virtual Machines:

  1. Separate, dedicated Operating System Kernels.
  2. Dedicated system resources (CPUs, RAM, HDD).
  3. Launch an entire simulated Operating System with machine state, etc.
  4. Examples: VirtualBox, Apple Virtualization.

Hypervisors:

  1. Shared underlying Operating System Kernel.
  2. Shared system resources.
  3. Light-weight, virtualized, containerized, encapsulated, and isolated environments.
  4. Examples: Hyper-V, Docker VMM, the Windows Subsystem for Linux (WSL2, which runs on a subset of Hyper-V).

Docker:

  1. Uses Virtualization through an independent Hypervisor or Virtual Machine framework.
  2. Either shared or dedicated system resources.
  3. Either environment style.
  4. Example: Docker Desktop.

Note: Docker can be configured in either fashion (is compatible with either the Virtual Machine or Hypervisor approach), but typically uses some kind of Virtual(ized) Machine, and is primarily for building, shipping, and flexibly running applications.

  1. https://en.wikipedia.org/wiki/Hyper-V
  2. https://developer.apple.com/documentation/virtualization
  3. https://docs.docker.com/desktop/features/vmm/
  4. https://learn.microsoft.com/en-us/windows/wsl/about
  5. https://www.vinchin.com/vm-backup/hyper-v-vs-docker.html

Docker: Basic Commands

# Build from dockerfile
## Use this over `docker build - < Dockerfile`
## Note that the dockerfile copies in ANY files in this directory
docker build .

# Docker metrics and processes
docker images --all
docker info
## Get the CONTAINER ID <aa9f01c38d04>
docker stats

# Cleanup
## Remove image
docker rmi -f IMAGE_ID
## Remove container
docker rm CONTAINER_NAME
docker stop CONTAINER_NAME
docker system prune --volumes

Refer to: https://github.com/Thoughtscript/docker

Also: https://github.com/Thoughtscript/postgres_json_practice/blob/master/1%20-%20dockerfile/docker.sh

Code samples:

  1. https://github.com/Thoughtscript/docker
  2. https://github.com/Thoughtscript/postgres_json_practice/blob/master/1%20-%20dockerfile/docker.sh

Docker: On Mac ARM

Some recent changes for use on Mac.

Rosetta 2

Tested on an Apple M3 laptop with macOS 15.1.1 (24B91)

  1. Newer Macs (equipped with Apple's ARM CPUs) will require installing Rosetta 2 - a binary translator that converts x86-64 instructions to ARM instructions.

  2. Since Docker emulates x86-64 operations, Rosetta 2 must now be installed on a Mac to use Docker:

     softwareupdate --install-rosetta
  3. Failing to do so will result in the following error:

     Rosetta is only intended to run on Apple Silicon with a macOS host using Virtualization.framework with Rosetta mode enabled.
  4. Make sure to update and restart Docker Desktop.

  5. Verify that the checkbox Settings > General > Virtual Machine Options > Use Rosetta for x86_64/amd64 emulation on Apple Silicon is selected.

With instructions for the above: https://www.docker.com/blog/docker-desktop-4-25/

Docker Commands

Since Docker on Apple's ARM CPUs requires Rosetta 2 (and newer Docker Desktop releases ship with Compose V2), the following Compose V2 command syntax is now enforced:

  1. docker-compose up is now docker compose up.

More on this change: https://docs.docker.com/compose/releases/migrate/

  1. https://support.apple.com/en-us/102527
  2. https://apple.stackexchange.com/questions/466197/docker-desktop-app-for-apple-silicon-requires-rosetta-2-why
  3. https://www.docker.com/blog/docker-desktop-4-25/
  4. https://docs.docker.com/compose/releases/migrate/

Docker: dockerfile

Two example dockerfiles (from the linked Code samples). The first initializes Postgres with an init script:

FROM postgres:13.0

# Execute init scripts
## These only have to be copied into /docker-entrypoint-initdb.d/
COPY init_json_sql.sql /docker-entrypoint-initdb.d/

The second builds and runs a Python application:

FROM python:3.8.2

RUN echo "Creating working dir and copying files"
RUN mkdir /app
WORKDIR /app
COPY . .

# update pip globally within the container
RUN python3 -m pip install --upgrade pip
# update requirements by directory
RUN cd ml && python3 -m pip install -r requirements.txt
# run the machine learning scripts to save off the annModels within the image
# the logs for these scripts will now show in Docker Desktop
RUN cd ml && python3 ml-conjunction.py && python3 ml-disjunction.py && python3 ml-implication.py && python3 ml-negation.py && python3 ml-nand.py

# this is apparently a required dependency of SQLAlchemy
RUN apt-get update && apt-get install -y default-mysql-client default-libmysqlclient-dev
RUN cd server && python3 -m pip install -r requirements.txt
# host and ports are set in server/main.py but they could be passed below instead
# these are required to bind the ips and ports correctly
CMD [ "bash", "run.sh" ]

Useful Dockerfile Commands

  1. https://docs.docker.com/engine/reference/builder/

Code samples:

  1. https://github.com/Thoughtscript/python_api_2023
  2. https://github.com/Thoughtscript/project_euler_2024
  3. https://github.com/Thoughtscript/mearn_2024
  4. https://github.com/Thoughtscript/erc20_2024
  5. https://github.com/Thoughtscript/more_python_api_2024

Docker: Images

Layers

Docker Images are assembled and built up using multiple Layers:

Docker: Storage

Docker Volumes

Docker Volumes are persistent data stores for Containers.

In Docker Compose, a Volume is declared in its own block, then associated with each Service (source, typically the Volume name) along with a destination path (a file path or directory within the Volume) where the persisted data will reside.

services:
  mongo:
    image: bitnami/mongodb:7.0.9
    ports:
      - "27017:27017"
    volumes:
      - 'mongodb_data:/bitnami/mongodb'
    environment:
      - MONGODB_ROOT_USER=rootuser
      - MONGODB_ROOT_PASSWORD=rootpass
      - MONGODB_USERNAME=testuser
      - MONGODB_PASSWORD=testpass
      - MONGODB_DATABASE=testdatabase
      # This is required on Apple Silicon https://github.com/docker/for-mac/issues/6620
      # https://github.com/bitnami/containers/issues/40947#issuecomment-1927013148
      - EXPERIMENTAL_DOCKER_DESKTOP_FORCE_QEMU=1
    networks:
      - testnet

  node:
    build:
      context: ./node
      dockerfile: dockerfile
    ports:
      - '8888:8888'
    depends_on:
      - mongo
    networks:
      - testnet
    restart: unless-stopped

  react:
    build:
      context: ./react
      dockerfile: dockerfile
    ports:
      - '443:443'
      - '1234:1234'
    depends_on:
      - node
    networks:
      - testnet
    restart: unless-stopped

  angular:
    build:
      context: ./angular
      dockerfile: dockerfile
    ports:
      - '4200:4200'
    depends_on:
      - node
    networks:
      - testnet

volumes:
  mongodb_data:
    driver: local

networks:
  testnet:
    driver: bridge

https://github.com/Thoughtscript/mearn_2024/blob/main/docker-compose.yml

Bind Mounts

Bind Mounts are Volumes that are Mounted from a specific location on the host machine into the Docker Image and Container.

Example: host directory ./static is bound to Docker Container file path: /opt/app/static.

# docker compose config
services:
  frontend:
    image: node:lts
    volumes:
      # Bind mount example
      - type: bind
        source: ./static
        target: /opt/app/static
volumes:
  myapp:

https://docs.docker.com/engine/storage/bind-mounts/

Dockerfile Volumes

A slight variation on the topics above. dockerfile Volumes can define a Mount Point at a specific location. For example, like so:

FROM ubuntu

USER myuser

RUN mkdir /myvol
VOLUME /myvol
RUN chown -R myuser /myvol

This can be used in tandem with chown privileges and ECS_CONTAINERS_READONLY_ACCESS to restrict what's writable within a Container to exactly the VOLUME. (AWS ECS will allow a VOLUME to be writable even if the rest of the Docker Image and Container aren't.)

https://docs.aws.amazon.com/config/latest/developerguide/ecs-containers-readonly-access.html

https://docs.docker.com/reference/dockerfile/#volume

Local Files

  1. /var/lib/docker - default Docker directory used to store data for Containers, Docker Images, and Volumes.
  2. /var/lib/docker/volumes - location from where Docker Mounts a Volume.
  3. Data is removed via: docker system prune -a.

Docker Storage Drivers

Docker Storage Drivers facilitate the Layered architecture and caching used when building Docker Images and running Containers.

  1. https://docs.docker.com/engine/storage/volumes/
  2. https://docs.docker.com/engine/storage/bind-mounts/
  3. https://docs.docker.com/reference/dockerfile/#volume
  4. https://docs.aws.amazon.com/config/latest/developerguide/ecs-containers-readonly-access.html

Code samples:

  1. https://github.com/Thoughtscript/mearn_2024/blob/main/docker-compose.yml

Kubernetes: Overview

This section summarizes several specific introductory concepts/topics.

https://kubernetes.io/docs/reference/kubectl/quick-reference/

Syntax

Full reference: https://kubernetes.io/docs/reference/kubectl/generated/

Some commonly encountered Command verbs of interest:

Some commonly encountered Entity Kinds of interest:

Kind Name (Shortname):

  1. pod, pods (po)
  2. node, nodes (no)
  3. namespace, namespaces (ns)
  4. service, services (svc)
  5. deployment, deployments (deploy)

# Display all Kinds, Names, Shortnames, ApiVersions (for config yaml)
kubectl api-resources

Generally, kubectl COMMAND ENTITY defines a valid operation where:

  1. COMMAND is one of the above Command verbs; and
  2. ENTITY is one of the above Entity Kinds.

Note: A Shortname can be substituted for any associated Kind or Name.

  1. https://kubernetes.io/docs/reference/kubectl/quick-reference/
  2. https://kubernetes.io/docs/reference/kubectl/generated/

Kubernetes: Main Entities Overview

I divide Kubernetes entities into a few kinds:

  1. Meta-level, organizational, groupings.
  2. Kubernetes Kinds (think primitives).
  3. Kubernetes Workload Management that assist in deploying and managing Pods.

Groupings

Grouping entities. These group and help manage Kubernetes Pods and Kubernetes Nodes.

Namespace

Cluster

Service

Basic

Think the basic conceptual units, primitives, or entities.

Node

Pod

Container

Image

Workload Management

ReplicaSets

Deployment

Kubernetes: Kubectl Inspection Quick Reference Sheet

Some common and useful kubectl commands to inspect running or deployed Resources.

minikube will install the default namespace automatically.

Commands

# See all resources
kubectl get all
# Display all Pods in every Namespace
kubectl get po --all-namespaces
# Display all pods in the `default` Namespace (will be empty by default)
kubectl get po -n default
# Display all Pods
kubectl get pods

# Display all Pods with the Label "label: example"
kubectl get po --selector label=example
# Display all Pods with the Labels "label: example" and "x: y"
kubectl get po --selector label=example,x=y
# Display all Namespace
kubectl get namespaces

# Display all services in every Namespace
kubectl get services --all-namespaces
# Display detailed information about test - Containers, status, etc.
kubectl describe namespace test

# Display detailed information about pod newpods-4npfh
kubectl describe po newpods-4npfh 

Syntax

Generally:

  1. kubectl get KIND or kubectl get NAME defines a valid operation.
  2. kubectl describe KIND or kubectl describe NAME defines a valid operation.
  1. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
  2. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/

Kubernetes: Kubectl Create Quick Reference Sheet

Some common and useful kubectl commands supporting the creation of Resources.

Commands

# Create Namespace test
kubectl create namespace test

# Then review the created resource
kubectl get namespaces
# Create a Pod from a file (see below)
kubectl create -f pod-definition.yaml

# Create or update Pod from a file (see below)
kubectl apply -f pod-definition.yaml

Create vs. Apply

kubectl create:

  1. Imperative - performs a specific operation.
  2. Creates a Resource from a file or directly from within the CLI.
  3. Will error if the Resource already exists.
  4. Creates a new Resource if it doesn't exist.

kubectl apply:

  1. Declarative - specifies a target state.
  2. Creates a Resource by way of a manifest or configuration file.
  3. Won't error if the Resource already exists.
  4. Updates a Resource if it exists, creates one if it doesn't.

https://theserverside.com/answer/Kubectl-apply-vs-create-Whats-the-difference

Note: both kubectl create and kubectl apply require configuration files to create Pods with a specific Docker Image. --image is supported for kubectl create deployment.

Syntax

Generally:

  1. kubectl create KIND and kubectl create NAME each define a valid operation.
  1. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
  2. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
  3. https://theserverside.com/answer/Kubectl-apply-vs-create-Whats-the-difference

Kubernetes: Kubectl Run Quick Reference Sheet

Some common and useful kubectl run commands.

Commands

# Create and run a Pod named nginx using the Image nginx w/out a config file
kubectl run nginx --image=nginx

Syntax

Generally, kubectl run POD_NAME --image=DOCKER_IMAGE defines a valid operation.

  1. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run

Kubernetes: Kubectl Delete and Update Quick Reference Sheet

Some common and useful kubectl commands supporting the deletion and updating of Resources.

Commands

# Delete Pod webapp
kubectl delete pod webapp

# Delete and recreate a resource using a modified configuration file
kubectl replace --force -f mypodconfig.yaml

# Create or update Pod from a file (see below)
kubectl apply -f pod-definition.yaml

# See the YAML config for a resource and edit it
kubectl edit rs new-replica-set

Note: Kubernetes Pods aren't "moved" (say, from one Kubernetes Node to another); they are deleted on the first and recreated on the second.

Apply vs Replace

kubectl apply:

  1. Updates a Resource if it exists, creates one if it doesn't.
  2. Won't typically error.

kubectl replace:

  1. Updates if a Resource exists.
  2. Will error if the Resource doesn't exist.

Consider the following scenario:

  1. kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml which creates the following YAML output:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
            resources: {}
    status: {}
  2. kubectl apply -f nginx-deployment.yaml
  3. kubectl describe deployment nginx displays:
      Name:                   nginx
      Namespace:              default
      CreationTimestamp:      Thu, 19 Dec 2024 15:37:38 -0600
      Labels:                 app=nginx
      Annotations:            deployment.kubernetes.io/revision: 1
      Selector:               app=nginx
      Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
      StrategyType:           RollingUpdate
      MinReadySeconds:        0
      RollingUpdateStrategy:  25% max unavailable, 25% max surge
      Pod Template:
        Labels:  app=nginx
        Containers:
         nginx:
          Image:         nginx
          Port:          <none>
          Host Port:     <none>
          Environment:   <none>
          Mounts:        <none>
        Volumes:         <none>
        Node-Selectors:  <none>
        Tolerations:     <none>
      Conditions:
        Type           Status  Reason
        ----           ------  ------
        Available      True    MinimumReplicasAvailable
        Progressing    True    NewReplicaSetAvailable
      OldReplicaSets:  <none>
      NewReplicaSet:   nginx-676b6c5bbc (1/1 replicas created)
      Events:
        Type    Reason             Age    From                   Message
        ----    ------             ----   ----                   -------
        Normal  ScalingReplicaSet  2m11s  deployment-controller  Scaled up replica set nginx-676b6c5bbc to 1
  4. Modifying the YAML document like so:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
    name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
            resources: {}
    status: {}
  5. Then reapplying: kubectl apply -f nginx-deployment.yaml and inspecting via kubectl describe deployment nginx:
      Name:                   nginx
      Namespace:              default
      CreationTimestamp:      Thu, 19 Dec 2024 15:37:38 -0600
      Labels:                 <none>
      Annotations:            deployment.kubernetes.io/revision: 1
      Selector:               app=nginx
      Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
      StrategyType:           RollingUpdate
      MinReadySeconds:        0
      RollingUpdateStrategy:  25% max unavailable, 25% max surge
      Pod Template:
        Labels:  app=nginx
        Containers:
         nginx:
          Image:         nginx
          Port:          <none>
          Host Port:     <none>
          Environment:   <none>
          Mounts:        <none>
       Volumes:         <none>
        Node-Selectors:  <none>
        Tolerations:     <none>
      Conditions:
        Type           Status  Reason
        ----           ------  ------
        Available      True    MinimumReplicasAvailable
        Progressing    True    NewReplicaSetAvailable
      OldReplicaSets:  <none>
      NewReplicaSet:   nginx-676b6c5bbc (1/1 replicas created)
      Events:
        Type    Reason             Age   From                   Message
        ----    ------             ----  ----                   -------
        Normal  ScalingReplicaSet  104s  deployment-controller  Scaled up replica set nginx-676b6c5bbc to 1

Omitting a field and re-running kubectl apply will remove that field from the running Resource (above, the labels disappeared). It's not a Patch operation per se: fields that aren't supplied aren't ignored or automatically repopulated in most cases.
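That field-removal behavior can be reproduced with a small dict-diff helper (removed_fields is a hypothetical name of mine, not a kubectl function; real apply also consults the last-applied-configuration annotation). It reports which fields of the live object would vanish because they're absent from the newly applied manifest:

```python
# Hypothetical helper: list fields present on the live object but
# missing from the applied manifest (these get removed on apply).
def removed_fields(live, applied, prefix=""):
    gone = []
    for key, val in live.items():
        path = f"{prefix}{key}"
        if key not in applied:
            gone.append(path)
        elif isinstance(val, dict) and isinstance(applied[key], dict):
            gone.extend(removed_fields(val, applied[key], path + "."))
    return gone

live = {"metadata": {"name": "nginx", "labels": {"app": "nginx"}}}
applied = {"metadata": {"name": "nginx"}}  # labels omitted
print(removed_fields(live, applied))  # ['metadata.labels']
```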

Edit

kubectl edit allows for:

  1. The live editing of a Resource through its YAML configuration. The edited YAML isn't persisted to a local file (you must save a copy explicitly if you need the updated YAML later).
  2. Automatic updating of the Resource post-YAML modification. One doesn't need to run a second command to apply the new changes or remove prior Resource versions.

For this reason, kubectl edit is likely to be the preferred way to quickly modify Resources.

Syntax

Generally:

  1. kubectl delete KIND NAME (or kubectl delete KIND/NAME) defines a valid operation.
  2. kubectl edit KIND NAME (or kubectl edit KIND/NAME) defines a valid operation.
  1. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
  2. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#replace
  3. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
  4. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_edit/
  5. https://medium.com/@grpeto/kubectl-edit-performing-magic-in-kubernetes-684669a8bccd

Kubernetes: Install Minikube

I've chosen minikube here since it's the Kubernetes distribution that's used during the CKA exam and probably the easiest to get set up locally. It's a bit underpowered for Enterprisey stuff but suffices for these purposes.

Gotchas

  1. You must have docker installed for the typical (and default) minikube installation.
  2. To install minikube in a cloud VM you'll need at least 2 CPU cores (Medium size on AWS).

Minikube Installation

Basically follow this: https://minikube.sigs.k8s.io/docs/start/

Download the correct version after installing docker.

# Verify install 
minikube version

# Start minikube
minikube start

Minikube Removal

Some relevant commands:

# Stop your local cluster
minikube stop

# Removes your local minikube cluster (and start over)
minikube delete

# Removes all local clusters
minikube delete --all
minikube delete --purge

Helpful Minikube Commands

Helpful commands when using minikube.

Display minikube resources with a UI:

minikube dashboard

Display addons and common tools that can be enabled and added:

minikube addons list

https://minikube.sigs.k8s.io/docs/handbook/controls/

Kubectl

Installing kubectl directly is likely a test question: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

Kubernetes: Kubectl Docker Discussion

Discussion and example highlighting major differences between using Docker Images in Docker and using them in Kubernetes.

Docker

Terminal One:

# Download image
docker pull elixir

# Open an interactive terminal through Elixir in Docker
## Note that calling this without -it will not allow commands to be passed to iex
docker run -it elixir

Terminal Two (Admin Console - use as needed)

# Display the container and obtain the CONTAINER ID
docker stats

# Display the downloaded images and obtain the IMAGE ID
docker images --all

# Remove container
docker stop CONTAINER_ID
docker rm CONTAINER_ID

# Remove downloaded image
docker rmi -f IMAGE_ID

# Remove all stopped containers, unused networks, and dangling images
docker system prune

# As above, but also remove unused volumes
docker system prune --volumes

So, in the above, one is able to pull elixir and run an interactive iex terminal.

Local Docker With Kubernetes

Close the other terminals and check out this awesome blog: https://www.yeahshecodes.com/docker/run-docker-images-locally-with-minikube

I've also found that using docker pull directly through kubectl will often result in Backoff Errors (CrashLoopBackOff) when using minikube. For example (in minikube dashboard):

Name: elixir.1803b10a81941af5 - Reason: BackOff - Source: kubelet, minikube - Sub-object: spec.containers{elixir} - Count: 6 - Last Seen: a minute ago
Message: Back-off restarting failed container elixir in pod elixir_default(cf328542-5b03-4e76-b690-7f65ca8f3db2)

Name: elixir.1803b0fec8dc0e33 - Reason: Pulling - Source: kubelet, minikube - Sub-object: spec.containers{elixir} - Count: 5 - First Seen: 16 minutes ago - Last Seen: 14 minutes ago
Message: Pulling image "elixir"

One might think using kubectl run elixir --image=elixir --namespace=test without additional configs would spin up a viable Pod but this is not the case for nearly every official Docker Image I tested in minikube.

After some further tinkering and research, additional careful configuration is indeed required (e.g. - creating a Deployment resource file in YAML) and is explained in further detail here: https://spacelift.io/blog/kubernetes-imagepullpolicy

However, there is another workaround. Per the blog post above and some personal tinkering, one can use a docker-compose.yaml file with a supplied "alias" (really a customized and conforming Image name):

version: "3.9"
services:
  nginx:
    image: localtest:v0.0.1
    build: .
    ports:
      - "80:80"

and a dockerfile:

FROM nginx:latest
EXPOSE 80

I tinkered with the config supplied in the above blog post and narrowed it down to just the above. That appears to be the most minimal config needed to use regular dockerfiles and Docker builds with minikube + Kubernetes (since minikube is the Kubernetes distribution used here for local development).

Now in an additional terminal:

# Ensure both minikube and docker are in the same env
eval $(minikube docker-env)

# Build from dockerfile
docker-compose up -d

# View Docker Images (from within minikube!)
docker images --all

# Create Namespace test
kubectl create namespace test

# Set namespace context
kubectl config set-context --current --namespace=test

# Review all resources in Namespace
kubectl --namespace test get all

# Deploy the Docker Image as a Pod to the Container
kubectl run localtest --image=localtest:v0.0.1 --image-pull-policy=Never --namespace test

# Review all resources in Namespace
kubectl --namespace test get all

Make sure that both minikube and docker are in the same env and visible to each other via the command eval $(minikube docker-env); otherwise, one will also encounter Backoff Errors.

Also, do make sure that you supply --image-pull-policy=Never or --image-pull-policy=IfNotPresent as needed along with the appropriate and supplied tag version on the image per: https://spacelift.io/blog/kubernetes-imagepullpolicy

Kubernetes: Docker Compose Translation

Translating Docker Compose to Kubernetes.

Configuration Files

Kubernetes deployment.yml's and service.yml's are typically configured and applied together (kubectl apply -f python-deployment.yaml,python-service.yaml):

  1. deployment.yml - defines Volumes, Commands to be executed, Docker Images, and Containers.
    • Contains many of the configuration settings one would find in docker-compose.yml.
  2. service.yml - specifies Ports, Port Mappings, and generally connects Pods in a Deployment to shared Network resources.
  3. configmap.yml - injects Secrets, Environment Variables, initialization scripts, and files into the Kubernetes context.
    • Associated with a Deployment within deployment.yml.
  1. https://github.com/Thoughtscript/python_pyramid_kub_2024
  2. https://github.com/Thoughtscript/python_pyramid_kub_2024/tree/main/kubernetes
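The mechanical part of that translation can be sketched in Python (compose_ports_to_service is a helper name I made up; real tooling such as kompose does far more). It maps docker-compose "HOST:CONTAINER" port strings onto the port/targetPort pairs used in a Kubernetes service.yml:

```python
# Illustrative sketch: translate docker-compose port strings into the
# port/targetPort pairs a Kubernetes Service spec would use.
def compose_ports_to_service(ports):
    out = []
    for entry in ports:
        host, _, container = entry.partition(":")
        # A bare "8080" (no colon) maps the same port on both sides
        out.append({"port": int(host), "targetPort": int(container or host)})
    return out

print(compose_ports_to_service(["80:80", "5432:5432"]))
```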

Linux Foundation CKA: Overview

Note: These notes will do "double duty" for both the KCNA and CKA exams.

The CKA is being rewritten 1/15/2025-ish.

I strongly recommend taking the excellent KodeKloud KNCA and CKA courses!

Cluster Architecture

  1. Control Plane - manage, plan, schedule, and monitor Nodes
    • etcd - Key value persistence store used to back all Kubernetes Cluster data.
      • Listens on Port 2379 by default.
      • Typically, only kube-apiserver will directly interact with etcd.
    • kube-apiserver - exposes the Kubernetes API to allow the following functionalities to be performed: reference.
    • kube-scheduler - Control Plane component that listens for newly created Kubernetes Pods with no assigned Kubernetes Node and assigns them one.
    • kube-controller-manager - runs and manages Kubernetes Control Plane Controllers.
      • Example: Node Controller - responsible for noticing and responding when Nodes fail.
      • Example: Job Controller - watches for Kubernetes Job objects that represent one-off tasks then creates Pods to run those tasks to completion.
    • Cloud Controller Manager (Optional) - embeds Cloud Provider-specific control logic.
  2. Nodes - used to group Resources, run one or more Kubernetes Pods, and managed by the Control Plane. Node components run on every Node.
    • kubelet - an Agent that runs on every Node in the Cluster.
      • Ensures that Containers are running in a Pod.
    • kube-proxy (Optional) - maintains Network rules on Nodes allowing communication to and between Pods (inside or outside the Cluster).
    • Container Runtime - manages the execution and lifecycle of Containers within the Kubernetes environment.
      • Supports any Container Runtime implementing the Kubernetes Container Runtime Interface (CRI).
      • Note: support for Docker through Dockershim is now deprecated; containerd is the go-to Container Runtime for running Docker Images going forward.

Nodes, Pods, and Containers

  1. Many Containers can run in a single Kubernetes Pod.
  2. Many Kubernetes Pods can run in a single Kubernetes Node.

kube-proxy vs. kubectl proxy

  1. kube-proxy (Optional) - maintains Network rules on Nodes allowing communication to and between Pods (inside or outside the Cluster).
    • Exists on Nodes.
  2. kubectl proxy - proxy for kube-apiserver.
    • Exists in the Control Plane.

Namespaces

  1. Some Kubernetes Resources can be organized, isolated, and grouped into Kubernetes Namespaces. Examples:

    • Kubernetes Deployments
    • Kubernetes Service Accounts
    • Kubernetes Nodes
  2. Some Resources are Global (available regardless of the current Kubernetes Namespace or accessible within any of them). Examples:

    • Kubernetes Volumes
    • Kubernetes Services (not to be confused with Kubernetes Service Accounts)
    • Kubernetes User Accounts

Default Namespaces

These are automatically created when a Kubernetes Cluster is created through normal means:

https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

  1. https://kubernetes.io/docs/concepts/architecture/#etcd
  2. https://kubernetes.io/docs/concepts/architecture/#kube-apiserver
  3. https://kubernetes.io/docs/concepts/architecture/#kube-controller-manager
  4. https://kubernetes.io/docs/concepts/architecture/cloud-controller
  5. https://kubernetes.io/docs/concepts/architecture/#node-components
  6. https://kubernetes.io/docs/concepts/architecture/#container-runtime
  7. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy
  8. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

Linux Foundation CKA: etcd

etcd is the backing store for Kubernetes Clusters. It's typically provisioned automatically (along with other Control Plane Resources) but can be manually added (say in a Kubernetes Cluster that's manually provisioned and/or with customized/custom coded Resources, Controllers, etc.).

Typically, only kube-apiserver will directly interact with etcd.

etcd listens on Port 2379 by default (both within and outside of Kubernetes).

Useful Commands

etcdctl put mykey myvalue # add a key value pair
etcdctl get mykey # read key value
etcdctl del mykey # delete pair mykey, myvalue by key
etcdctl watch mykey  # prints out changes to mykey's value
etcdctl lease grant 60 # create a Lease
## prints Lease ID
# > lease 32695410dcc0ca06 granted with TTL(60s)

# add mykey with auto-eviction once the Lease expires
etcdctl put mykey myvalue --lease=32695410dcc0ca06 ## --lease takes the Lease ID printed above, not the TTL itself

https://etcd.io/docs/v3.4/dev-guide/interacting_v3/

  1. https://etcd.io/docs/v3.4/dev-guide/interacting_v3/
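To make the Lease/TTL behavior concrete, here's a toy in-memory model (LeaseStore is my own illustrative class, not etcd's actual implementation): keys attached to an expired Lease are evicted on access.

```python
# Toy model of etcd Leases: a key attached to an expired Lease is evicted.
import time

class LeaseStore:
    def __init__(self):
        self.leases = {}   # lease_id -> expiry timestamp
        self.data = {}     # key -> (value, lease_id or None)

    def lease_grant(self, lease_id, ttl):
        self.leases[lease_id] = time.time() + ttl

    def put(self, key, value, lease=None):
        self.data[key] = (value, lease)

    def get(self, key):
        value, lease = self.data.get(key, (None, None))
        if lease is not None and time.time() >= self.leases[lease]:
            del self.data[key]          # Lease expired: evict the key
            return None
        return value

s = LeaseStore()
s.lease_grant("32695410dcc0ca06", ttl=0.05)
s.put("mykey", "myvalue", lease="32695410dcc0ca06")
print(s.get("mykey"))   # myvalue
time.sleep(0.1)
print(s.get("mykey"))   # None (evicted)
```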

Linux Foundation CKA: Pods

YAML

# pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx-container
      image: nginx

Required fields (for any Resource):

  1. apiVersion - typically v1 but there are some outliers.
  2. kind - specifies the kind of Resource (Pod, Service, Deployment, etc.)
  3. metadata - name of the Resource and any user-specified labels
  4. spec - configuration, attached Resources, etc.

Creating

# Create a Pod from a YAML configuration
## Note: there's no way to pass an Image in as a flag with create!
kubectl create -f pod-definition.yaml
kubectl apply -f pod-definition.yaml

# Create and run a Pod named nginx using the Image nginx w/out a config file
kubectl run nginx --image=nginx

Inspection

# Get all Pods regardless of Namespace
kubectl get po --all-namespaces

# Detailed inspection of Pod
kubectl describe po myapp-pod

Updating

To update a Pod configuration file you can first destroy the running Pod then recreate the new Pod with new configuration:

# Using the above config
kubectl delete pod myapp-pod

kubectl create -f pod-definition.yaml

Or, use:

  1. kubectl replace -f pod-definition.yaml - to replace with a Resource using a modified configuration file.
  2. kubectl edit rs new-replica-set - to both display the config for the Resource (in YAML format) and edit it. (Editing alone may not be sufficient to update the actually running Resources - in a Replica Set, you'd want to delete the running Pods so they deploy with the updated configuration.)

Deployments

  1. See Deployments.
  2. Can be deployed as Kubernetes Static Pods that can be tracked by the Control Plane but otherwise exist independently. When using Kubernetes Static Pods, one will be restricted to a single Kubernetes Node and its local kubectl (and while the Control Plane can be made aware of the Kubernetes Static Pod, it has no other control over managing, modifying, or interacting with it).

Linux Foundation CKA: Deployments

Kubernetes Pods will typically be deployed with the following considerations:

  1. A certain number of Kubernetes Pods will need to be available at all times.
  2. Updates to Kubernetes Pods must be carefully sequenced.

The above requirements are typically satisfied by:

  1. Kubernetes Replica Sets which allow a number of Kubernetes Pods to be specified through the replicas field.
  2. Kubernetes Rollout Strategies

Kubernetes Deployments therefore wrap both Kubernetes Replica Sets and provide the additional management tooling required for carefully deploying new changes.

Kubernetes Deployments will be assigned to various available Kubernetes Nodes based on configured Affinities, Taints, Tolerations, etc.

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-pyramid
  labels:
    app: python-pyramid-postgres
    service: python-pyramid
spec:
  replicas: 3
  selector:
    matchLabels:
      app: python-pyramid-postgres
      service: python-pyramid
  template:
    metadata:
      labels:
        app: python-pyramid-postgres
        service: python-pyramid
    spec:
      containers:
      - name: python-pyramid
        image: python-pyramid:v0.1
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

Rollout Strategies

  1. RollingUpdate
    • Default.
    • Will gradually update Kubernetes Pods a few at a time (depending on the configured settings).
  2. Recreate
    • Kills all Kubernetes Pods before new ones (with the applied updates) are created.

Two optional fields help to control this process: maxUnavailable (how many Pods may be unavailable during the update) and maxSurge (how many extra Pods may be created above replicas), both set under spec.strategy.rollingUpdate.
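A rough Python sketch of how maxUnavailable and maxSurge bound a RollingUpdate (the helper names are mine; Kubernetes rounds percentage maxUnavailable down and percentage maxSurge up):

```python
# Sketch: the floor on ready Pods and ceiling on total Pods during a
# RollingUpdate, given replicas, maxUnavailable, and maxSurge.
import math

def resolve(value, replicas, round_up):
    """Resolve an absolute number or a percentage string like '25%'."""
    if isinstance(value, str) and value.endswith("%"):
        frac = int(value[:-1]) / 100 * replicas
        return math.ceil(frac) if round_up else math.floor(frac)
    return value

def rolling_update_bounds(replicas, max_unavailable, max_surge):
    unavailable = resolve(max_unavailable, replicas, round_up=False)
    surge = resolve(max_surge, replicas, round_up=True)
    return {"min_ready": replicas - unavailable, "max_total": replicas + surge}

print(rolling_update_bounds(4, "25%", "25%"))  # {'min_ready': 3, 'max_total': 5}
```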

Stateful Sets and DaemonSets

  1. Kubernetes Stateful Sets
    • Are distinct from Kubernetes Replica Sets but are akin in certain specific ways. (e.g. - can specify replicas and used to manage a group of Kubernetes Pods). Is not a subkind of rs.
    • Stateful information is retained about a Kubernetes Stateful Set as it is destroyed and recreated.
    • This guarantees certain settings remain invariant (Kubernetes Volume Claims, Kubernetes Network settings, etc.).
    • Does not guarantee that only one Kubernetes Pod managed by the Kubernetes Stateful Set is deployed to each Kubernetes Node.
    • https://docs.rafay.co/learn/quickstart/kubernetes/deployments-daemonsets-statefulsets/
  2. Kubernetes Daemon Sets
    • Are distinct from Kubernetes Replica Sets but are akin in certain specific ways. (e.g. - can specify replicas and used to manage a group of Kubernetes Pods). Is not a subkind of rs.
    • Specifies that one, a particular, Kubernetes Pod must be present in all (or some specified) Kubernetes Nodes.
    • Ideal for observability tooling, logging, or Agent-based installations.
    • https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

Commands

# Check the rollout status
kubectl rollout status deployment python-pyramid

# Rollback the update
## Note the /
kubectl rollout undo deployment/python-pyramid

https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/

  1. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
  2. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/
  3. https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
  4. https://docs.rafay.co/learn/quickstart/kubernetes/deployments-daemonsets-statefulsets/
  5. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

Code samples:

  1. https://github.com/Thoughtscript/python_pyramid_kub_2024/tree/main/kubernetes

Linux Foundation CKA: Services

Maps Kubernetes Pod Ports to Node Ports by listening on specified Ports and forwarding them.

YAML

apiVersion: v1
kind: Service
metadata:
  name: myServiceName

spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 31000 # 30000-32767
    - port: 8080
      # port is the only required field
      ## if solely provided, targetPort is assumed to be the same as port
      ## and an available port in range 30000-32767 is selected

  selector:
    app: myPod # Match a Pod by Label
    type: someMetadataType

Kubernetes Ports:

  1. Inbound port is mapped to internal targetPort.
  2. nodePort is exposed externally on each Node's (static) IP Address.
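A hypothetical helper mimicking the defaulting rules above (in a real Cluster the API Server performs this, including picking the random nodePort; resolve_service_port is my own name):

```python
# Sketch of Service port defaulting: port is required, targetPort
# defaults to port, and nodePort must land in 30000-32767.
import random

def resolve_service_port(spec):
    port = spec["port"]                       # the only required field
    target = spec.get("targetPort", port)     # defaults to port
    node = spec.get("nodePort", random.randint(30000, 32767))
    if not 30000 <= node <= 32767:
        raise ValueError(f"nodePort {node} outside 30000-32767")
    return {"port": port, "targetPort": target, "nodePort": node}

print(resolve_service_port({"port": 80, "targetPort": 80, "nodePort": 31000}))
print(resolve_service_port({"port": 8080}))   # defaults applied
```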

Kubernetes Service YAML config type values:

  1. LoadBalancer
    • Load balances external traffic into the Kubernetes Cluster.
    • Requires that an external Load Balancer exists (such as an AWS ALB).
  2. ClusterIP
    • Default value.
    • Facilitates communications within the Kubernetes Cluster.
  3. NodePort
    • Enables a Kubernetes Node to be accessible from outside a Kubernetes Cluster.
  4. ExternalName
    • An internal Alias for an external DNS Name.
  5. None
    • A Headless Kubernetes Service.
    • Unlike the others, this isn't configured in the spec.type field.
    • Instead it's configured like so: spec.clusterIP: None.

Load Balancing

  1. A Kubernetes Service will Load Balance multiple Kubernetes Pods in the same Kubernetes Node (of the same Kubernetes Cluster).
  2. It will also do so for multiple Kubernetes Pods in different Kubernetes Node within the same Kubernetes Cluster.

DNS Names

  1. Internal to the same Kubernetes Namespace, Kubernetes Pods can access another Kubernetes Service by its name.
  2. Kubernetes Pods can access another Kubernetes Service in another Kubernetes Namespace by its fully qualified name: <SERVICE_NAME>.<NAMESPACE_NAME>.svc.cluster.local.
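That naming rule is simple enough to sketch directly (service_dns is my own helper name, not a Kubernetes API):

```python
# Sketch of in-cluster DNS naming: same-Namespace lookups use the bare
# Service name; cross-Namespace lookups use the fully qualified name.
def service_dns(service, namespace=None):
    if namespace is None:
        return service  # same-Namespace shorthand
    return f"{service}.{namespace}.svc.cluster.local"

print(service_dns("redis-service"))          # redis-service
print(service_dns("redis-service", "test"))  # redis-service.test.svc.cluster.local
```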

Commands

# Create a Service named redis-service with Pod redis associated on Port 6379
kubectl expose pod redis --port=6379 --name redis-service

  1. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport

Linux Foundation CKA: Scheduling

Kubernetes Scheduling assigns Kubernetes Pods to Kubernetes Nodes.

Schedule Matching and Constraints

There are several considerations and ways that Kubernetes Schedules Pods:

  1. The inbuilt kube-scheduler typically and automatically manages assigning Kubernetes Pods to Kubernetes Nodes.
  2. Manually setting the nodeName field.
  3. nodeSelector matching against Node Labels.
  4. affinity
  5. Taints and Tolerations.
  1. kube-scheduler - Default. Automatic, managed Scheduling with no user control. API-centric. Picks the first available Node (given the other constraints), so placement is effectively random. Attracting association.
  2. nodeName - Manual. A field (Key Value pair) set in YAML or through the JSON API. Most precise but requires the exact Node name. Attracting association. Exact Node.
  3. nodeSelector - A field (Key Value pair) associating a Node Label with a Pod's nodeSelector. Assigns by a single Label (think CSS Selector). Attracting association. Restricted to a range of Nodes.
  4. affinity - An affinity field specified in Pod YAML. More precise than nodeSelector: supports multiple Labels, predicates, and complex expressions. Attracting or repelling association. Restricted to a narrower range of (or away from certain) Nodes.
  5. Taints and Tolerations - Taints are set on Nodes and Tolerations on Pods; fields support predicates and complex expressions. A Taint causes a Node to be avoided by any Pod without a matching Toleration. Repelling association. Pods are restricted to a range of Nodes.

Selectors and Labels

Selectors are used to pair Resources with other Resources.

Below, a Kubernetes ReplicaSet is associated with Kubernetes Pods with Label App1.

 apiVersion: apps/v1
 kind: ReplicaSet
 metadata:
   name: simple-webapp
   labels:
     app: App1
     function: Front-end
 spec:
  replicas: 3
  selector:
    matchLabels:
     app: App1 # Selector looks for matches on one of more labels
  template:
    metadata:
      labels:
        app: App1 # template labels specify the labels on each deployed Pod
        function: Front-end
    spec:
      containers:
      - name: simple-webapp
        image: simple-webapp   

# Display all Pods with the Label "label: example"
kubectl get po --selector label=example
# Display all Pods with the Labels "label: example" and "x: y"
kubectl get po --selector label=example,x=y
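matchLabels semantics can be sketched in a few lines (matches is my own helper name): every selector pair must be present on the Pod's labels, while extra Pod labels are ignored.

```python
# Minimal sketch of matchLabels: all selector pairs must match the
# Pod's labels; additional Pod labels don't prevent a match.
def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "App1", "function": "Front-end"}},
    {"name": "db-1",  "labels": {"app": "App2"}},
]
selected = [p["name"] for p in pods if matches({"app": "App1"}, p["labels"])]
print(selected)  # ['web-1']
```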

Manually Assign a Pod to a Node

To manually Schedule a Kubernetes Pod (assigning it to a Kubernetes Node).

# find all nodes
kubectl get nodes

Displaying:

NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   25m   v1.31.0
abcNode        Ready    <none>          24m   v1.31.0

Edit the config:

nano mypodconfig.yaml
# mypodconfig.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  -  image: nginx
     name: nginx
  nodeName: abcNode # Add 
  ## kube-scheduler schedules a Pod by assigning it the field nodeName with a valid Node 
  ## This can be done manually too!

Recreate the Pod:

# Delete and recreate in two commands
kubectl delete pod nginx
kubectl create -f mypodconfig.yaml

# Delete and recreate in one command
kubectl replace --force -f mypodconfig.yaml

nodeSelector

Labels are added to Kubernetes Nodes:

# Show all Labels
kubectl get nodes --show-labels

# Create a new Label on a Node
kubectl label nodes my-node-name x=y

nodeSelector is specified on Kubernetes Pods:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

Any Kubernetes Pod with a valid nodeSelector will be paired with an available Kubernetes Node with a matching Kubernetes Node label.

Affinity

  1. Supports complex, multi, Label associations.
  2. Supports Labels (see nodeSelector).
  3. Can define repellent Anti-Affinities.
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:3.8

affinity goes under template in Kubernetes Deployment config YAML.

Taints and Tolerations

Kubernetes Taints restrict which Kubernetes Pods can be assigned to which Kubernetes Nodes (which Kubernetes Nodes can accept which Kubernetes Pods).

Kubernetes Taints and Tolerations do not guarantee that a Kubernetes Pod will be assigned to a specific Kubernetes Node(s)!

  1. Kubernetes Taints are set on Kubernetes Nodes
    • kubectl taint nodes my-node-name key=value:taint-effect
    • If a Kubernetes Pod isn't Tolerant (it's Intolerant), its behavior is determined by the specified Taint Effect.
    • Taint Effects: NoSchedule, PreferNoSchedule, NoExecute.
    • A Taint is repelling - negatively associated.
  2. Kubernetes Tolerations are set on Kubernetes Pods
    • Toleration values must be quoted strings (" ") in YAML; Taints on a Node can be viewed with |grep (see below).
      apiVersion: v1
      kind: Pod
      metadata:
        name: myapp-pod
      spec:
        containers:
        - name: nginx-container
          image: nginx
        tolerations:
        - key: "app"
          operator: "Equal"
          value: "blue"
          effect: "NoSchedule"
  3. kubectl describe node myapp-node |grep Taint
# Taint
kubectl taint nodes controlplane x=y:NoSchedule

# View Taints
kubectl describe node controlplane |grep Taint

# Use the fully qualified name to remove (note the - at the end)
kubectl taint nodes controlplane x=y:NoSchedule-
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-
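A simplified sketch of the matching rule (tolerates is my own helper; it omits corner cases such as an empty key with operator Exists, which tolerates every Taint):

```python
# Sketch of Taint/Toleration matching: a Pod tolerates a Taint when
# effect, key, and (for operator Equal) value all line up.
def tolerates(toleration, taint):
    if toleration.get("effect") not in (None, "", taint["effect"]):
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return toleration["key"] == taint["key"]
    return (toleration["key"] == taint["key"]
            and toleration.get("value") == taint["value"])

taint = {"key": "app", "value": "blue", "effect": "NoSchedule"}
ok  = {"key": "app", "operator": "Equal", "value": "blue", "effect": "NoSchedule"}
bad = {"key": "app", "operator": "Equal", "value": "red",  "effect": "NoSchedule"}
print(tolerates(ok, taint), tolerates(bad, taint))  # True False
```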

Scheduling Algorithm

The default kube-scheduler uses the following two-step algorithm to assign Kubernetes Pods to Kubernetes Nodes:

  1. Filtering
    • Uses the affinity, Taints and Tolerations, nodeSelector, and so on to identify a range of Kubernetes Nodes to assign a Kubernetes Pod to.
  2. Scoring
    • Bin Packing - optimization scenario to maximize the number of "packages" that fit into a "bin" (hence "bin packing").
    • Determines available Resources and if the Kubernetes Pod can be added.
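A toy version of the two steps (real kube-scheduler scoring is pluggable with many plugins; least-spare-CPU bin packing here is just one possible strategy, and schedule is my own illustrative function):

```python
# Toy two-step scheduler: filter Nodes that can fit the Pod, then score
# by bin packing (prefer the Node left with the least spare CPU).
def schedule(pod_cpu, nodes):
    feasible = [n for n in nodes if n["free_cpu"] >= pod_cpu]     # 1. Filtering
    if not feasible:
        return None  # unschedulable: no Node fits
    best = min(feasible, key=lambda n: n["free_cpu"] - pod_cpu)   # 2. Scoring
    return best["name"]

nodes = [{"name": "node-a", "free_cpu": 4.0},
         {"name": "node-b", "free_cpu": 1.0}]
print(schedule(0.5, nodes))  # node-b (tightest fit)
```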
  1. https://kubernetes.io/docs/concepts/scheduling-eviction/
  2. https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
  3. https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/

Linux Foundation CKA: Resource Limits

Kubernetes supports setting Resource limits (primarily CPU, Memory).

Requests and Limits

  1. A Request specifies the amount of a Resource a Container asks for and is guaranteed to have allotted to it.
  2. A Limit caps the maximum amount of a Resource a Container may consume.

A LimitRange enforces default, minimum, and maximum values for these Requests and Limits across a Kubernetes Namespace.

For example: you may want to specify a 500M memory Limit but a 250M memory Request to allow some extra memory buffer (pun) for other dependencies and services (including the Operating System itself).

Values

Standardized Resource values apply, simplifying resourcing whether a Container or Pod runs on a single-core, multi-core, or 48-core machine.

  1. cpu
  2. memory
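A hypothetical parser for these quantity suffixes (parse_quantity is my own name, not the Kubernetes API); note the trap that lowercase m means milli (so "500m" of memory is half a byte), while megabytes are "500M" and mebibytes "500Mi":

```python
# Sketch of Kubernetes resource quantity suffixes, normalized to base
# units (bytes for memory, cores for cpu).
SUFFIXES = {"m": 1e-3, "K": 1e3, "M": 1e6, "G": 1e9,
            "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_quantity(q):
    # Try longer suffixes first so "Mi" wins over "m"/"M"
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if q.endswith(suffix):
            return float(q[:-len(suffix)]) * SUFFIXES[suffix]
    return float(q)  # no suffix: plain number

print(parse_quantity("500M"))   # 500000000.0 (megabytes)
print(parse_quantity("500m"))   # 0.5 (milli!)
print(parse_quantity("256Mi"))  # 268435456.0 (mebibytes)
```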

YAML

Such constraints can be set as a standalone LimitRange Kind:

apiVersion: v1
kind: LimitRange
metadata:
  name: testLimit
  namespace: ns1
spec:
  limits:
    - default:
        cpu: 200m
        memory: 500M
      defaultRequest:
        cpu: 100m
        memory: 250M
      type: Container

The Kubernetes Limit Range is applied to a Kubernetes Namespace (and its Kubernetes Pods only indirectly) like so:

kubectl apply -f cpu-constraint.yaml --namespace=my-name-space

https://kubernetes.io/docs/concepts/policy/limit-range/

Spec Configuration

They can also be set within another Resource definition:

  1. Since Kubernetes 1.32, Pods can set this under spec.resources. This sets the overall limits for the entire Kubernetes Pod (and not just a single Container):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-resources-demo
      namespace: pod-resources-example
    spec:
      resources:
        limits:
          cpu: "1"
          memory: "200M"
        requests:
          cpu: "1"
          memory: "100M"
      containers:
       - name: pod-resources-demo-ctr-1
         image: nginx
         resources:
           limits:
             cpu: "0.5"
             memory: "100M"
           requests:
             cpu: "0.5"
             memory: "50M"

    Note: This feature is Disabled by default and is in Alpha as of 12/2024.

  2. Via spec.containers[].resources which specifies limits for each Container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: frontend
    spec:
      containers:
      - name: app
        image: images.my-company.example/app:v4
        resources:
          requests:
            memory: "64M"
            cpu: "250m"
          limits:
            memory: "128M"
            cpu: "500m"

https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container

  1. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu
  2. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory
  3. https://kubernetes.io/docs/concepts/policy/limit-range/
  4. https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/
  5. https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
  6. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container

Linux Foundation CKA: Security

An excellent read: https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ for Kubernetes security best practices.

Some Kubernetes security best practices:

  1. Disable Root Access.
  2. Close and/or disable all unneeded Container Ports.
  3. Disable any default password-based authentication (and use a stronger alternative).
  4. Move all Credentials and Secrets out of plaintext configuration and/or files into something more secure (a third-party vault, temporary in-memory tokens, etc.).
  5. Restrict access by RBAC and SSH Key only (Jump/Bastion style).

TLS

  1. Using openssl one will typically generate an SSL Certificate in the following way:
    • A Key: .key
    • A Certificate Signing Request (CSR): .csr
    • Which in turn are used to sign and generate a Certificate: .crt, .pem (Privacy Enhanced Mail data and file format)
  2. The above steps are performed to generate a Certificate Authority Root Certificate that's used to generate valid Client Certificates.
    # simple openssl to generate root cert for Kubernetes Cluster CA
    openssl genrsa -out ca.key 2048
    # subj CN is Certificate Name
    openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
    openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt
  3. Generate a Certificate for each Client, signed with the Root Certificate and Key created above:
     openssl genrsa -out client.key 2048
     openssl req -new -key client.key -subj "/CN=my-client" -out client.csr
     # using the above root CA cert and key (-CAcreateserial creates the serial number file on first use)
     openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt
  4. It's good practice to generate a Certificate for each Resource in the Kubernetes Cluster (in similar fashion):
    • So that all communication between Resources is encrypted.
    • Each Certificate can be associated with a single Client or kind of access aiding with proper RBAC.
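Once generated, the Client Certificate above can be sanity-checked against the Root CA before it's distributed (file names follow the openssl commands above):

```shell
# Inspect the Certificate's subject, issuer, and validity window
openssl x509 -in client.crt -noout -subject -issuer -dates

# Verify that the Client Certificate chains up to the Root CA
openssl verify -CAfile ca.crt client.crt
```

openssl verify prints client.crt: OK when the chain is valid.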

https://docs.openssl.org/1.0.2/man1/x509/#signing-options

Multi-Cluster Config

Defines and controls access from one Kubernetes Cluster to others:

apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
    # Here, the Authentication Strategy is through valid Certificates.
    certificate-authority: dev-ca-file
    server: https://5.6.7.8
  name: development
- cluster:
    # And here.
    certificate-authority: prod-ca-file
    server: https://1.2.3.4
  name: production

contexts:
- context:
    namespace: development
    user: dev-env-user
  name: development
- context:
    namespace: production
    user: prod-env-user
  name: production

users:
- name: dev-env-user
  user:
    client-certificate: dev-cert-file
    client-key: dev-key-file
- name: prod-env-user
  user:
    client-certificate: prod-cert-file
    client-key: prod-key-file

View the merged configuration with:

kubectl config view

Kubernetes Users are not Kinds - they are not created through the API. For example, there is no kubectl create user <username> command.

https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

Users and Service Accounts

Kubernetes Users, therefore, aren't represented by Kinds or standalone objects but only indirectly through the specified Authentication Strategy.

Kubernetes Service Accounts are managed through the Kubernetes API and are bound to specific Kubernetes Namespaces.
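Unlike Users, Service Accounts are bona fide Kinds. A minimal sketch (the name here is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  namespace: default
```

kubectl create serviceaccount build-robot is the equivalent imperative command.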

Role-Based Access Control

Unlike Kubernetes Users, Kubernetes Roles are bona fide objects, that are managed through Kubernetes API, and which have an associated Kind:

# dev-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "create", "update", "delete"]

kubectl create -f dev-role.yaml

Kubernetes Roles are then bound (through a Kubernetes Role Binding) to a Kubernetes User:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-role-binding-for-dev-user
subjects:
# Not a Kubernetes Kind per se
- kind: User # field is kind but not object
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

Kubernetes Secrets

Kubernetes Secrets are akin to Kubernetes Config Maps but are specifically intended to store sensitive data.

  1. Values stored as such are base64-encoded but not encrypted by default, so anyone with API access can view, modify, or retrieve them.
  2. Additional security is strongly encouraged per: https://kubernetes.io/docs/concepts/security/secrets-good-practices/
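A minimal (hypothetical) Secret; note that the values under data are merely base64-encoded, not encrypted:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
data:
  # base64 of "admin" - encoding, NOT encryption
  username: YWRtaW4=
```

echo YWRtaW4= | base64 -d recovers the plaintext immediately, which is why the practices linked above matter.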

https://kubernetes.io/docs/concepts/configuration/secret/

Impersonation and Authorization

Authentication involves verifying the identity of a User or Resource (who or what is making a request).

Authorization involves what a Resource or User can do once they have authenticated.

Administrators can check whether other Users can perform certain actions like so:

kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod

Users can see if they can for themselves:

# Check to see if I can create pods in any namespace
kubectl auth can-i create pods --all-namespaces

https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/

  1. https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/
  2. https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authentication-strategies
  3. https://docs.openssl.org/1.0.2/man1/x509/#signing-options
  4. https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
  5. https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes
  6. https://kubernetes.io/docs/reference/access-authn-authz/rbac/
  7. https://kubernetes.io/docs/concepts/security/secrets-good-practices/
  8. https://kubernetes.io/docs/concepts/configuration/secret/
  9. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/

Linux Foundation CKA: Networking

DNS

  1. Kubernetes will automatically provision a DNS Server for a new Kubernetes Cluster.
  2. Kubernetes DNS will be managed and created for Kubernetes Pods and Kubernetes Services.
    • Kubernetes Services are automatically assigned a DNS Record.
    • Kubernetes Pods can be exposed directly or through a Kubernetes Service (and the way they are exposed determines what the DNS Record is).

Kubernetes DNS will create DNS Records and map IP Addresses like so:

  1. Kubernetes Service: my-svc.my-namespace.svc.cluster-domain.example
  2. Kubernetes Pod: pod-ipv4-address.my-namespace.pod.cluster-domain.example
    • By convention Kubernetes Pods replace dots with dashes in the IP Address.
  3. The above will be mapped to a Network IP Address.
  4. cluster-domain.example is typically thought of as the top-level Domain (despite the dot between cluster-domain and example) for all Resources within the Kubernetes Cluster:
    • The last item example can be thought of as the top-level Domain.
    • cluster-domain can be thought of as a Subdomain (created for the Kubernetes Cluster). Especially in a multi-Cluster deployment.
  5. svc or pod are properly part of the Domain and Subdomain Names and are dependent on the Kubernetes Kind the DNS Record is created for.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

Container Network Interface

  1. Containers running in a Kubernetes Pod will communicate using a Container Network Interface (CNI).
  2. CNI Plugins are executables stored in the CNI bin directory (/opt/cni/bin by default) on each Kubernetes Node of the Kubernetes Cluster.
  3. Each Kubernetes Node must have at least one CNI to connect to a network.
  4. Configuration is handled through CNI-specific JSON configuration files.

https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/

Pod Networking

  1. Kubernetes Pod networking requires a third-party tool and isn't available out of the box.
    • Flannel
    • Cilium
    • WeaveWorks
    • The above tools install Agents on each Kubernetes Node to facilitate direct Kubernetes Pod to Kubernetes Pod intercommunication.
  2. Kubernetes Pod networking requirements:
    • Kubernetes Pods within the same Kubernetes Node should be able to communicate using their IP Addresses.
    • Each Kubernetes Pods should have a unique IP Address.
    • Kubernetes Pods should be able to communicate across different Kubernetes Nodes using their IP Addresses.
  3. Kubernetes Pods can only access a Kubernetes Service by its shorthand service name if they are in the same Kubernetes Namespace. (Otherwise, use the FQDN.)

Kubernetes Endpoints

Kubernetes Endpoints lack a glossary entry but are supported by kubectl syntax.

They refer to an assigned IP Address and Port combination (the traditional use of the term, as opposed to, say, an AWS API Gateway Endpoint, which associates a specific DNS name, URL Context Path, and REST Method). As such, Kubernetes Endpoints can be associated with Kubernetes Pods or external Resources.

Example Minikube:

kubectl get endpoints

# NAME         ENDPOINTS           AGE
# kubernetes   192.168.49.2:8443   5h12m

Network Policies

  1. Ingress Policies - specify rules for inbound traffic.
  2. Egress Policies - specify rules for outbound traffic.

Configured through the NetworkPolicy Kind that's associated with one or more Kubernetes Pods through podSelector matchLabels.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  # An empty podSelector selects all Pods in the Namespace
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

A variety of values can be supplied:

  1. Default Policies:
    • default-deny-ingress
    • allow-all-ingress
    • default-deny-egress
    • allow-all-egress
    • default-deny-all
  2. Kubernetes Namespaces through the namespaceSelector field.
  3. Port ranges.
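A hypothetical Ingress rule combining items 2 and 3 - a namespaceSelector plus a Port range (label values are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-range
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow traffic only from Pods in Namespaces labeled team=monitoring
    - namespaceSelector:
        matchLabels:
          team: monitoring
    ports:
    # endPort turns a single port into an inclusive range
    - protocol: TCP
      port: 8080
      endPort: 9090
```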

https://kubernetes.io/docs/concepts/services-networking/network-policies/

Ports

Important Kubernetes Port numbers: https://kubernetes.io/docs/reference/networking/ports-and-protocols/

Port(s) or Range   Resource                    Required          Description
2379, 2380         etcd, also kube-apiserver   Yes. (Inbound.)   etcd default listening and API.
6443               kube-apiserver              Yes. (Inbound.)   Used by everything.
10250              kubelet/kubelet API         Yes. (Inbound.)   Used on Control Plane and kubelet on all worker Nodes.
10256              kube-proxy                  Yes. (Inbound.)   Load balancers on worker Nodes.
10257              kube-controller-manager     Yes. (Inbound.)   HTTPS on Control Plane.
10259              kube-scheduler              Yes. (Inbound.)   Default listening on Control Plane.
30000-32767        Everything.                 Yes. (Inbound.)   NodePort Services on all worker Nodes.

  1. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  2. https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
  3. https://kubernetes.io/docs/concepts/services-networking/network-policies/
  4. https://stackoverflow.com/questions/52857825/what-is-an-endpoint-in-kubernetes
  5. https://kubernetes.io/docs/reference/networking/ports-and-protocols/

Linux Foundation CKA: Storage

Volumes and Volume Claims

  1. Kubernetes Volume Claims are singly attached to a Kubernetes Volume:
    • Only one Kubernetes Volume Claim can bind to a Kubernetes Volume.
    • Kubernetes Volumes without a Kubernetes Volume Claim are Unbound.
    • Kubernetes Volume Claims that have no available Kubernetes Volumes remain in a Pending state until a new Kubernetes Volume becomes available.
  2. PersistentVolumeClaim (pvc) is the Kubernetes Volume Claim Kind.
  3. PersistentVolume (pv) is the Kubernetes Volume Kind.

Kubernetes Volume example YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-volume
  labels:
    type: local
    app: python-pyramid-postgres
    service: postgres
spec:
  storageClassName: local-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/postgresql

Kubernetes Volume Claim example YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-volume-claim
  labels:
    type: local
    app: python-pyramid-postgres
    service: postgres
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
  volumeName: postgres-volume
  storageClassName: local-storage

Reclamation Policies

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming

Kubernetes Volume Reclamation Policies specify what action(s) should occur when an associated Kubernetes Claim is Deleted:

  1. Retain
  2. Recycle
  3. Delete
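The policy is set via persistentVolumeReclaimPolicy on the PersistentVolume spec (a sketch extending the postgres-volume example above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-volume
spec:
  # Keep the underlying storage after the Claim is Deleted
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/postgresql
```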

Access Modes

Kubernetes Volume access modes:

  1. ReadOnlyMany
  2. ReadWriteOnce
  3. ReadWriteMany

  1. https://kubernetes.io/docs/concepts/storage/persistent-volumes/
  2. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming

Code samples:

  1. https://github.com/Thoughtscript/python_pyramid_kub_2024/tree/main/kubernetes

Linux Foundation CKA: Auto-Scaling

  1. Kubernetes Auto-Scaling is managed by dedicated Kubernetes Pods equipped with Controllers that track specified Metrics.
    • Recommendations are then made on the basis of those Metrics and other specified Resource Limits/configurations.
    • Default Resource Limits and/or Kubernetes Pod counts are intercepted and overridden based on those recommendations.
  2. Kubernetes Auto-Scaling comes in three varieties:
    • Vertical Pod Autoscaler (VPA) for dynamically modifying CPU and Memory Requests based on actual use.
    • Horizontal Pod Autoscaler (HPA) for scaling the number of Kubernetes Pods that exist in a Kubernetes Deployment.
    • Cluster Autoscaler manages and scales Kubernetes Nodes based on use and demand.

VPA                 HPA               Cluster Autoscaler
Resource Requests   Pods              Nodes
CPU, RAM            Number of Pods    Number of Nodes

Vertical Pod Autoscaler

Specifies how CPU and Memory Requests are updated based on actual use.

  1. Must first be independently installed
    • Download and run the install script: ./hack/vpa-up.sh
  2. Given an existing Kubernetes Deployment (say, my-example-deployment), kubectl apply -f vpa-example.yaml using a YAML configuration:
    # vpa-example.yaml
    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: my-example-deployment-vpa
    spec:
      targetRef:
        apiVersion: "apps/v1"
        kind:       Deployment
        # Associate the VPA with the Deployment
        name:       my-example-deployment
      updatePolicy:
        # Take care here...
        updateMode: "Auto"
  3. updateMode can be set in four modes:
    • "Auto"
    • "Recreate"
    • "Initial"
    • "Off"
  4. kubectl get vpa to retrieve information about VerticalPodAutoscaler objects.

https://kubernetes.io/docs/concepts/workloads/autoscaling/#scaling-workloads-vertically

Horizontal Pod Autoscaler

Specifies a minReplicas to maxReplicas range controlled by specified metrics, along with automatic scaling within those bounds.

Given an existing Kubernetes Deployment (say, my-example-deployment):

  1. Create an HPA imperatively: kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
  2. Here's the declarative way to do so kubectl apply -f hpa-example.yaml using YAML:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-example-deployment-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        # Associate the HPA with the Deployment
        name: my-example-deployment
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
  3. kubectl get hpa to retrieve information about HorizontalPodAutoscaler objects.

Note: Kubernetes Replica Set replicas values are overridden by HPA minReplicas and maxReplicas. The number specified by replicas is provisioned first, and is then adjusted by HPA to the desired min and max amounts.

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

Cluster Autoscaler

Typically used with a specific Cloud Provider.

Manages the number of and scaling for Kubernetes Nodes.

https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler

https://medium.com/tensult/cluster-autoscaler-ca-and-horizontal-pod-autoscaler-hpa-on-kubernetes-f25ba7fd00b9

  1. https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/installation.md
  2. https://technologyconversations.com/2018/10/10/to-replicas-or-not-to-replicas-in-kubernetes-deployments-and-statefulsets/
  3. https://stackoverflow.com/questions/66431556/what-is-the-relationship-between-the-hpa-and-replicaset-in-kubernetes
  4. https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/quickstart.md
  5. https://kubernetes.io/docs/concepts/workloads/autoscaling/#scaling-workloads-vertically
  6. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
  7. https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
  8. https://medium.com/tensult/cluster-autoscaler-ca-and-horizontal-pod-autoscaler-hpa-on-kubernetes-f25ba7fd00b9

Linux Foundation CKA: GitOps

GitOps: tracking, versioning, and handling configuration code for infrastructure.

Versioning

Versioning (in the context of Kubernetes) typically involves:

  1. Versioning Docker Images - handled through one's Docker Repository and Dockerfile (including but not limited to Docker Hub).
  2. Versioning of Kubernetes configuration - typically handled indirectly (through Docker Image versions mentioned above) or through a dedicated Package Manager like Helm.
    • Kubernetes Labels might also suffice but I haven't personally seen them used for that purpose.

Helm

Helm is the primary Package Manager for Kubernetes.

Packages are organized into Helm Charts that add to standard Kubernetes YAML configuration templating features, built-in functions, more flexible syntax, etc.:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Greetings Celestial Sphere!"

Push vs. Pull Approaches

  1. Push - an external CI/CD Pipeline (e.g. Jenkins) applies configuration changes to the Kubernetes Cluster when they're merged.
  2. Pull - an Agent/Operator running inside the Kubernetes Cluster (e.g. Flux, Argo CD) watches the Git Repository and syncs changes itself.

A good read: https://thenewstack.io/push-vs-pull-in-gitops-is-there-really-a-difference/

Benefits of GitOps CI/CD

  1. Can use Git to track both Source Code and infrastructure configuration.
  2. Supports automated rollbacks (to specific Git Commits, Tags, or prior Releases).
  3. Git is already used in most CI/CD Pipelines.

CI/CD Tools

Continuous Integration/Continuous Delivery is typically implemented using either general-purpose tools (like Jenkins) or Kubernetes-specific ones.

CI/CD tools specific to Kubernetes include Flux and Argo CD:

https://dev.to/ariefwara/jenkins-argo-cd-4ld5

  1. https://thenewstack.io/push-vs-pull-in-gitops-is-there-really-a-difference/
  2. https://helm.sh/
  3. https://helm.sh/docs/chart_template_guide/getting_started/
  4. https://fluxcd.io/flux/
  5. https://argo-cd.readthedocs.io/
  6. https://earthly.dev/blog/flux-vs-argo-cd/
  7. https://blog.aenix.io/argo-cd-vs-flux-cd-7b1d67a246ca
  8. https://dev.to/ariefwara/jenkins-argo-cd-4ld5

Linux Foundation CKA: Useful Commands

Commands that are useful for the exam itself.

Core Commands

# Switch Namespace context
kubectl config set-context --current --namespace=test

# Get all Resources
kubectl get all

# Get all Pods regardless of Namespace
kubectl get po --all-namespaces
## Inspect specific Pod
kubectl get pod mypod
### Detailed info
kubectl describe pod mypod 

# Dry run/don't actually create and output a YAML file with filename
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml

# See the YAML config for a Resource, edit it live w/out saving, and automatically apply those changes
kubectl edit rs new-replica-set

# Create or update Pod from a file (see below)
kubectl apply -f pod-definition.yaml
## kubectl apply is often preferred over create and/or replace

# Create and run a Pod named nginx using the Image nginx w/out a config file
kubectl run nginx --image=nginx

# Print out config to YAML
kubectl get deploy nginx -o yaml > nginx.yaml

# Create a Service named redis-service with Pod redis associated on Port 6379
kubectl expose pod redis --port=6379 --name redis-service
kubectl expose pod httpd --port=80 --name=httpd --type=ClusterIP

# Check to see if I can create pods in any namespace
kubectl auth can-i create pods --all-namespaces

# Show all Labels
kubectl get nodes --show-labels
# Create a new Label on a Node
kubectl label nodes my-node-name x=y

# Add a taint
kubectl taint nodes controlplane x=y:NoSchedule
# Remove a taint
kubectl taint nodes controlplane x=y:NoSchedule-
# View taints
kubectl describe node controlplane | grep Taint
## These are present in `kubectl describe` 
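For a Pod to be scheduled onto the tainted Node above, its spec needs a matching toleration (a hypothetical fragment):

```yaml
# Pod spec fragment tolerating the taint x=y:NoSchedule
spec:
  tolerations:
  - key: "x"
    operator: "Equal"
    value: "y"
    effect: "NoSchedule"
```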

Techniques

If a YAML is needed as a baseline one can be generated like so:

# Print out config to YAML 
kubectl get deploy nginx -o yaml > nginx.yaml
# Then override and update current Resources 
kubectl replace -f nginx.yaml --force

Sometimes a field is protected (containers[0].name). This can nevertheless be updated by recreating:

# Edit some Resource
kubectl edit pod my-pod
# Even if it errors out a copy is saved to /tmp/
## /tmp/kubectl-edit-12345667.yml

# Update through recreation
kubectl delete pod my-pod
kubectl apply -f /tmp/kubectl-edit-12345667.yml

Creating and editing files within the exam environment:

touch my.yaml
vi my.yaml # Vim if nano not present in env
nano my.yaml # Nano text editor

Terraform: General Concepts

General notes and prep for HCTA0-003.

Configuration Language

  1. Uses a consistent (uniform) vocabulary to configure deployments and resources.
    • (Input, Data, Local) Variables, Resources, Providers
  2. Abstracts away many otherwise manual CLI operations.
  3. Allows configuration reuse (through Modules and imports).
  4. Typed configuration helps to validate and avoid errors/mistakes.
    • Built in formatting and file validation.

Providers

  1. (Mostly) Cloud Agnostic - the Configuration Language itself is Cloud Agnostic but you'll need to hook in Cloud-specific Providers.
  2. Providers allow the same Configuration Language to deploy resources to many cloud environments (and even locally, on Docker).
  3. Providers typically need an underlying CLI or SDK (such as awscli).

Stateful

  1. Terraform uses the root of a dir as its Context.
  2. Any file that's imported or present at root dir will be evaluated or considered automatically.
  3. Terraform State tracks the definitions provided for within the .tf configs.
    • Also data Variables retrieved from a Cloud Provider
    • And the live status of any resources created, updated, running, or destroyed within the Cloud by those .tf configs.

Interactive

Supports interactive Shell commands:

  1. terraform init - initializes the working directory (downloads Providers, configures the Backend).
  2. terraform fmt - will format .tf files.
  3. terraform validate - will validate .tf files (returning Success! The configuration is valid.).
  4. terraform apply - deploys resources specified, configured, and defined in .tf files.
  5. terraform plan - prints out the specified, configured, and defined resources without deploying them. Output can be saved into a file.
  6. terraform show - displays the current State of the Terraform Context.
  1. https://developer.hashicorp.com/terraform/tutorials/certification-003/associate-review-003
  2. https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
  3. https://registry.terraform.io/providers/hashicorp/aws/latest/docs

Code samples:

  1. https://github.com/Thoughtscript/terraform_2024

Terraform HCTA0-003: General Concepts

Backend

  1. Terraform Backends specify where Terraform stores its State File.
  2. Storing a State File in a remote location is a beneficial practice that can improve security.
  3. Some Backends support multiple Workspaces.

A Backend is configured within the terraform block with the keyword backend:

terraform {
  backend "remote" {
    organization = "example_corp"

    workspaces {
      name = "my-app-prod"
    }
  }
}

https://developer.hashicorp.com/terraform/language/backend

Workspaces

  1. Support multiple deployments of the same configuration.
  2. Each Workspace has its own State File.
  3. If a Backend is configured, the State File will be created using the configured (remote) Backend.

terraform workspace new MY_WORKSPACE
terraform workspace select MY_WORKSPACE
terraform workspace show
terraform workspace list

https://spacelift.io/blog/terraform-workspaces

https://developer.hashicorp.com/terraform/language/state/workspaces

State

  1. Community Terraform will persist Secrets to State in a plaintext and unencrypted way (although this can be masked in say the Terminal).
  2. HCP Terraform will encrypt Secrets in State at rest.
  3. Saved into a local terraform.tfstate file in the working directory by default; non-default Workspace State Files are stored under terraform.tfstate.d/.

Default Files and Directories

Files created by and used within Terraform:

  1. terraform.tfstate - the local State File, saved in the working directory by default; non-default Workspace State Files live under terraform.tfstate.d/.
  2. .terraform/providers is the default location where Providers are downloaded, saved, and cached locally.
  3. secret.tfvars file where Secrets and sensitive values are stored.
  4. credentials.tfrc.json - where API Tokens retrieved from terraform login are stored.

Important TF Variables

  1. TF_VAR - a prefix. Format: TF_VAR_name.
    • Specifies a Terraform Environment Variable.
    • Note: it's not TF_ENV or TF_ENV_VAR!
  2. TF_LOG - Specifies the Logging Visibility Level.
    • Example: TF_LOG=TRACE
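For example (assuming a Variable declared as variable "region" {} in the configuration):

```shell
# Supplies var.region without a -var flag or .tfvars file
export TF_VAR_region=us-east-1

# Maximum logging verbosity
export TF_LOG=TRACE
```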

https://developer.hashicorp.com/terraform/cli/config/environment-variables

Logging Visibility Levels

In descending order of verbosity:

  1. TRACE
  2. DEBUG
  3. INFO
  4. WARN
  5. ERROR
  6. OFF

User-Defined Functions

  1. Terraform doesn't presently support User-Defined Functions:

    "The Terraform language does not support user-defined functions, and so only the functions built in to the language are available for use."

  2. One can combine pre-existing, in-built, Functions in creative ways but one cannot define say a Function or Function name.

https://developer.hashicorp.com/terraform/language/functions

  1. https://developer.hashicorp.com/terraform/language/backend
  2. https://developer.hashicorp.com/terraform/language/state/workspaces#backends-supporting-multiple-workspaces
  3. https://spacelift.io/blog/terraform-workspaces
  4. https://developer.hashicorp.com/terraform/language/state/workspaces
  5. https://developer.hashicorp.com/terraform/language/functions
  6. https://developer.hashicorp.com/terraform/cli/commands/refresh
  7. https://developer.hashicorp.com/terraform/cli/workspaces#workspace-internals
  8. https://developer.hashicorp.com/terraform/cli/config/environment-variables

Code samples:

  1. https://github.com/hashicorp/terraform-guides/tree/master

Terraform HCTA0-003: Sensitive Values

Secrets

Terraform supports the following ways to store and manage Secrets:

  1. secret.tfvars file
  2. sensitive function
  3. sensitive keyword

Secrets and State

  1. Secrets are always saved/stored in a State File as plaintext.
  2. HCP Terraform will encrypt the State File itself and allow it to be saved to a remote location. Secrets will nevertheless be persisted in plaintext within that State File.
  3. The sensitive keyword only suppresses/masks a value within the console. (It doesn't prevent the value from being persisted into a State File as plaintext.) (See below for more details.)

Sensitive Function

The sensitive function:

  1. Treats a value as if it were marked with the sensitive keyword (below).
  2. Allows values to be stored elsewhere (outside of a particular configuration file), in a manner akin to Pickling/Unpickling (in Python) or Serializable (in Java).
  3. The actual value will nevertheless be visible in the local State File and in some scenarios (like certain logging messages).

locals {
  sensitive_content = sensitive(file("${path.module}/sensitive.txt"))
}

Sensitive Keyword

The sensitive keyword suppresses the value of a Variable from a terraform plan or terraform apply output:

variable "user_information" {
  type = object({
    name    = string
    address = string
  })
  sensitive = true
}

There are some important limitations:

  1. sensitive values may still appear in error messages or logs (regardless of their being suppressed in the CLI).
  2. Any value that relies on a sensitive value will likewise be treated as sensitive.
  3. A sensitive value may still be used or accessed by a person with the ability to do so.
    • Access is not controlled or restricted by the sensitive keyword.
  4. sensitive values are nevertheless persisted into a State File if no other backend or store has been configured.
    • These values will be both persisted in a local file and unencrypted!
    • So for Production environments, one will want to use something like HashiCorp Vault or AWS KMS (with at-rest encryption).
  5. An entire block of values may be suppressed if the block is considered sensitive.
    • Use the keyword sparingly and precisely.
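Limitation 2 has a practical consequence for Output Variables: an output that references a sensitive Variable must itself be marked sensitive, or Terraform will refuse to plan. A sketch (using the user_information Variable above):

```hcl
output "user_name" {
  value     = var.user_information.name
  # Required because var.user_information is marked sensitive
  sensitive = true
}
```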

https://developer.hashicorp.com/terraform/language/values/variables#suppressing-values-in-cli-output

View Sensitive Value

You may need to see a sensitive value in the CLI:

  1. terraform output MY_OUTPUT_NAME (the name of an output block)
  2. terraform show or terraform show -json

Know too that the above represent limitations to the level of secrecy afforded by Terraform out-of-the-box.

  1. https://developer.hashicorp.com/terraform/language/functions/sensitive
  2. https://developer.hashicorp.com/terraform/language/values/variables#suppressing-values-in-cli-output
  3. https://developer.hashicorp.com/terraform/tutorials/certification-associate-tutorials-003/sensitive-variables

Terraform HCTA0-003: Syntax

Dynamic Blocks

  1. dynamic blocks support for_each functionality when used in tandem with a list Type.

Example:

variable "settings" {
  type = list(map(string))
}

resource "aws_elastic_beanstalk_application" "tftest" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

resource "aws_elastic_beanstalk_environment" "tfenvtest" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.11.4 running Go 1.12.6"

  dynamic "setting" {
    for_each = var.settings
    content {
      namespace = setting.value["namespace"]
      name = setting.value["name"]
      value = setting.value["value"]
    }
  }
} 
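A value matching var.settings might look like this (a hypothetical terraform.tfvars entry; each map in the list yields one generated setting block):

```hcl
settings = [
  {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "EXAMPLE_ENV_VAR"
    value     = "example"
  }
]
```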

https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks

Splat Expressions

  1. Consider the following valid Terraform Comprehension-esque Expression: [for o in var.list : o.id].
  2. It's equivalent to the more succinct Splat Expression: var.list[*].id.
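A runnable sketch (hypothetical variable and outputs; both forms produce the same list):

```hcl
variable "list" {
  type    = list(object({ id = string }))
  default = [{ id = "a" }, { id = "b" }]
}

# Comprehension-esque form
output "ids_for" {
  value = [for o in var.list : o.id]
}

# Equivalent Splat Expression
output "ids_splat" {
  value = var.list[*].id
}
```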

https://developer.hashicorp.com/terraform/language/expressions/splat

  1. https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks
  2. https://developer.hashicorp.com/terraform/language/expressions/splat

Code samples:

  1. https://github.com/hashicorp/terraform-guides/tree/master

Terraform HCTA0-003: Modules

It might be intuitive to think of a Module that calls or references another as Subclassing the other. That's not the case:

  1. A Child Module is a Module that's called or referenced (through a source value) from another Module.
  2. The calling Module is the Parent Module.

Example:

# Must be named or have a main.tf in Module dir.
## Helpful AWS examples: https://github.com/terraform-aws-modules/terraform-aws-ecs/tree/master/modules/container-definition

# Input Variables
variable "policy_arn" {
  description = "Name of specified policy"
  type        = string
}

variable "policy_attachment_name" {
  description = "Name of policy attachment"
  type        = string
}

variable "input_aws_iam_user_name" {
  description = "Supplied AWS IAM User"
  type        = string
}

resource "aws_iam_policy_attachment" "this" {
  name       = var.policy_attachment_name
  users      = [var.input_aws_iam_user_name]
  policy_arn = var.policy_arn
}

# Output Variables
output "id" {
  description = "ID of the policy"
  value       = aws_iam_policy_attachment.this.id
}

https://github.com/Thoughtscript/terraform_2024/blob/main/terraform/modules/policy/main.tf

https://developer.hashicorp.com/terraform/tutorials/certification-associate-tutorials-003/module-create

https://developer.hashicorp.com/terraform/language/modules

Input Variables

  1. Modules can define Input Variables that are Parameterized when the Module is imported and used.
  2. These are akin to Arguments or Parameters in Object Oriented Programming Languages.
  3. Input Variables use the same variable block syntax as typical Variables but usually lack a hardcoded value and often have a default field.

https://developer.hashicorp.com/terraform/language/modules
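As a sketch, the policy Module defined above might be called from a Parent Module like so (the source path and supplied values are illustrative):

```hcl
module "policy" {
  source = "./modules/policy"

  # Input Variables are Parameterized at the call site
  policy_arn              = "arn:aws:iam::aws:policy/ReadOnlyAccess"
  policy_attachment_name  = "example-attachment"
  input_aws_iam_user_name = "example-user"
}
```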

Exporting Variables

  1. A Module won't automatically have access to all the Variables of a child or called Module, only the exported ones.
  2. Exported Variables are called Output Variables.
  3. These are often placed in an outputs.tf file (within a Module directory).

These are defined using output blocks like so:

output "instance_ip_addr" {
  value = aws_instance.server.private_ip
}

https://developer.hashicorp.com/terraform/language/values/outputs
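A Parent Module then reads a child's Output Variable through the module.<NAME>.<OUTPUT> address. A minimal sketch, assuming a module block named "server" that exposes the instance_ip_addr output above:

```hcl
output "server_ip" {
  value = module.server.instance_ip_addr
}
```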

  1. https://developer.hashicorp.com/terraform/tutorials/certification-associate-tutorials-003/module-create
  2. https://developer.hashicorp.com/terraform/language/modules
  3. https://developer.hashicorp.com/terraform/language/values/variables
  4. https://developer.hashicorp.com/terraform/language/values/outputs

Code samples:

  1. https://github.com/Thoughtscript/terraform_2024/blob/main/terraform/modules/policy/main.tf
  2. https://github.com/hashicorp/terraform-guides/tree/master

Terraform HCTA0-003: Resource Drift

Importing

Terraform recommends using an import block to indicate that a Resource will come under Terraform management following terraform apply.

import {
  to = aws_instance.example
  id = "i-abcd1234"
}

resource "aws_instance" "example" {
  name = "hashi"
  # (other resource arguments...)
}

https://developer.hashicorp.com/terraform/language/import

https://developer.hashicorp.com/terraform/tutorials/certification-associate-tutorials-003/resource-drift#import-the-security-group

Refresh

Some HCTA0-003 questions asked about terraform refresh, which wasn't covered in the Udemy course I studied from.

  1. terraform refresh is now deprecated.
  2. It's an alias for terraform apply -refresh-only -auto-approve.
  3. terraform apply -refresh-only is now officially recommended and preferred.

https://developer.hashicorp.com/terraform/cli/commands/refresh

  1. https://developer.hashicorp.com/terraform/language/import
  2. https://developer.hashicorp.com/terraform/tutorials/certification-associate-tutorials-003/resource-drift#import-the-security-group
  3. https://developer.hashicorp.com/terraform/cli/commands/refresh

Terraform HCTA0-003: Commands

The big six command groups (plus a few extras):

Initialization and Setup

# Fetch Providers, dependencies, and external Modules
terraform init
# Download Modules or update them
## Akin to 'npm install'
terraform get

# Just update the existing ones
terraform get -update

Workspaces

# Display all Workspaces
terraform workspace list
# Create a new Workspace MY_WORKSPACE_NAME
terraform workspace new MY_WORKSPACE_NAME
# Switch to an existing Workspace MY_WORKSPACE_NAME
terraform workspace select MY_WORKSPACE_NAME

Formatting and Validation

# Format Terraform files
terraform fmt

# Also subdirectories
terraform fmt --recursive
# Validate Terraform files for internal consistency
## Naming, proper initialization, etc.
terraform validate

Viewing State

# List the contents of State
terraform state list

# Display a single Resource in State
terraform state show 'MY_RESOURCE_TYPE.MY_RESOURCE_NAME'
## 'terraform show' doesn't allow this level of granularity
# Refresh State
terraform refresh

## The above is an Alias for:
terraform apply -refresh-only -auto-approve
# Display the Output Variables and Values of a Module and/or Resource
terraform output

Planning

# Refreshes State and creates the Execution Plan
## Without modifying the actual Resources
## Dry Run
terraform plan

## Create a Destroy Plan only
terraform plan -destroy

Modifying Resources

# Refresh State and Apply changes to actual Resources!!
terraform apply
# Destroy all Resources!!
terraform destroy

## Alias for:
terraform apply -destroy
# Replace a specific Resource
## Destroy and recreate it
terraform apply -replace="MY_RESOURCE_KIND.MY_RESOURCE_NAME"
## This supersedes 'terraform taint'

Unlock State

# Force State to be unlocked
terraform force-unlock

Console and Saving Output

# Display State or Execution Plan as JSON
terraform show -json
## For Resource-specific granularity - 'terraform state show ...'

# Display a saved Plan or State file as JSON
terraform show -json MY_FILENAME

# Save the Execution Plan out to a file
terraform plan -out=my/path/file.json
# Evaluate Expressions interactively (prints to stdout)
terraform console

Credentials and Sensitive Values

# Log into HCP Terraform and obtain an API Token
terraform login

## Logout of HCP Terraform
terraform logout
# Reveal all Values including Sensitive ones
terraform show

# Display a potentially Sensitive value from an Output
terraform output MY_OUTPUT_NAME

  1. https://developer.hashicorp.com/terraform/language/values
  2. https://developer.hashicorp.com/terraform/cli/commands/output

PSPO I: Overview

Some notes I took before taking the Scrum.org - Professional Scrum Product Owner I (PSPO I) Exam.

Agile and Scrum

Typically encountered hand-in-hand (to the point that they are often used as synonyms).

The official Scrum Guide.

Not all Agile approaches involve Scrum, however (e.g. - Extreme Programming).

This approach contrasts with traditional Project Management in the following ways:

Agile Manifesto

  1. Individuals and Interactions over Process and Tools
  2. Working Software over Comprehensive Documentation
  3. Customer Collaboration over Contract Negotiation
  4. Responding to Change over Following a Plan

The Scrum Values

  1. Courage
  2. Focus
  3. Commitment
  4. Respect
  5. Openness
  1. https://www.scrum.org/professional-scrum-competencies/understanding-and-applying-scrum-framework
  2. https://www.scrum.org/resources/scrum-values
  3. https://scrumguides.org/scrum-guide.html

PSPO I: Scrum

Scrum Framework

Often encountered in Software Development, Scrum can also be (and is) used widely to manage and organize Work (of nearly any kind).

Scrum Pillars

Scrum Empiricism / Evidence Based Management:

  1. Measures, quantifies, and observes Product value (added and unrealized).
  2. Frequently inspects Product and market needs.
  3. Uses evidence to define Product Vision.

Pillars of this approach:

  1. Transparency - be open about their Work and Work Progress.
  2. Inspection - regularly review and inspect their Work and Work Progress.
  3. Adaptation - be flexible and willing to adapt their Work to fit changing demands.

Scrum Team

Core Team:

  1. Product Owner
    • Uses Empirical (Data Driven, Observational, Experimental) approaches to help determine what should or needs to be done on behalf of Customers and Stakeholders.
    • Defines the Product Vision.
  2. Scrum Master
    • Ensures conformance to the Scrum Methodology.
    • Guides Stand (Daily Scrum).
  3. Scrum Developers
    • Build and create the Product.
    • Does not necessarily mean "Software Developer"!
    • Work collaboratively, cross-functionally, and as a team sharing their individual competencies/using their collective skills to accomplish Work.
    • A Product Owner can be a Scrum Developer too.
    • Typically 5 (range: 3-10) Developers.

Also:

  1. Stakeholders, Customers

Scrum Artifacts

Core Artifacts:

  1. Product Backlog - the ordered list of Work toward the Product Goal; may span multiple Sprints.
  2. Sprint Backlog - the Work selected for the current Sprint, plus the Sprint Goal.
  3. Increment - what the Scrum Team is working toward.
    • There is only one Increment at a time.
    • Any time a Product or Product Vision is updated, the Increment changes to match.
    • Excludes any previously completed work that isn't part of the most recent Product Goal.

Also:

  1. User Stories - used to define and organize Work.
    • Are included in Product and Sprint Backlogs.
  2. Kanban Board - used to track and monitor Work Progress, Sprints.

Scrum Events

Event | Description | Present
Sprint Planning | Sprint Goal is defined by the Team. Items are selected for completion based on work estimates. | Product Owner (Participates), Scrum Master (Participates), Developers (Participate), Stakeholders (Can Be Present)
Sprint | Items selected by the Team to be worked on toward the next Increment or Sprint Goal. | Product Owner, Developers
Daily Scrum | What got done, what will be done, what's blocking folks. | Product Owner (Can Be Present), Scrum Master (Facilitates), Developers (Participate)
Sprint Review | Demonstration of completed items. | Product Owner, Scrum Master, Developers, Stakeholders
Sprint Retrospective | Reviews what can be improved, what went well, what could have gone better, etc. | Product Owner, Scrum Master, Developers
  1. Sprints
    • A Sprint goal is defined and the Scrum Team determines how best to accomplish that.
    • Work is organized into (typically) multi-week Sprints with each Scrum Developer committing to an assigned/selected amount of Work.
    • Work commitments include important considerations like documentation, testing, deployments, compliance, coding, etc.
  2. Sprint Planning (Pointing, Grooming)
    • Creating User Stories - User Stories are defined from the perspective of a Stakeholder (or Developer) and assigned some number of Points indicating the relative effort or complexity to complete that Work.
      • Points are used to estimate the amount of Work Scrum Developers will commit to during a Sprint.
      • These need not correspond to a specific unit of time (although the number of Points is used to determine the amount of Work a Developer can or will commit to during a specified amount of time).
      • Specific Work items have Acceptance Criteria clearly defined to determine the status and completion of an item.
    • Backlog Refinement - Work is constantly being reviewed and new Work items are moved both into Backlog and current/future Sprints.
  3. Daily Scrum (Stand)
    • Guided by Scrum Master
    • Typically involves: What I got done yesterday, What I'm going to do today, and Here are my blockers succinctly conveyed in <3 minutes per Scrum Developer.
    • May be followed by some kind of impromptu meeting to address critical concerns and inform key Stakeholders (per Agile).
  4. Sprint Review (Acceptance)
    • Tasks and Sprints are reviewed to determine what was completed.
    • Now often involves a Presentation or Demonstration of the completed Work to Stakeholders.
  5. Sprint Retrospectives (Retros)
    • Scrum Core Team discusses what went great, wrong, etc. during the last Sprint and determines ways to improve going forward.
  1. https://www.scrum.org/learning-series/scrum-team
  2. https://www.scrum.org/learning-series/scrum-events/

PSPO I: Definition of Done

Done in Scrum

  1. The Definition of Done (within the context of Scrum) is: the Increment has achieved a sufficient level of quality making it suitable for review by Stakeholders.
    • This level of quality should make it sufficient to release to Customers.
    • Furthermore, the aim is to create a Product that's truly beneficial to Customers, not just something that's on time and on budget.
  2. The same Definition of Done should be used throughout the Product (e.g. - every Team working on the same Product Backlog.)
  3. The Developers determine what the Definition of Done should be (not the Product Owner).

So, the Definition of Done should be the primary barometer of Work completion. Generally, the Definition of Done should enable a Scrum Team to release the Increment to Stakeholders (and the Increment should be of value to them).

As one refines the Definition of Done, they should keep that guiding principle in mind.

Best Practices

  1. https://www.scrum.org/learning-series/definition-done/characteristics-of-the-definition-of-done/characteristics-of-a-good-dod
  2. https://www.scrum.org/learning-series/definition-done/characteristics-of-the-definition-of-done/dod-characteristics-to-avoid
  1. https://www.scrum.org/learning-series/definition-done/frequently-asked-questions-about-being-done/what-does-it-mean-to-be-done-in-scrum-
  2. https://www.scrum.org/learning-series/definition-done/characteristics-of-the-definition-of-done/characteristics-of-a-good-dod
  3. https://www.scrum.org/learning-series/definition-done/characteristics-of-the-definition-of-done/dod-characteristics-to-avoid

PSPO I: Timeboxing

Sprint Backlog

Remember:

  1. The Product Owner has the ultimate say over the Sprint Backlog.
  2. But Developers Point Work, can add to the Backlog, and are primarily responsible for determining Work Progress toward the Increment/Sprint Goal.

Monthly Max Time Allocation

Scrum Event | Max Time Allocation
Backlog Refinement | No more than 10% of Developer capacity.
Sprint Planning | 8 Hours (for a one-month Sprint).
Sprint Review | 4 Hours (for a one-month Sprint).
Sprint Retrospective | 3 Hours (for a one-month Sprint).

By following the above general guidelines, one will likely reduce the amount of time allocated to other informal, ad hoc meetings.

From a variety of sources including this very helpful Udemy course - Section 4. Scrum Events.

Cone of Uncertainty

Generally, the ability to accurately predict the cost of a change improves over time.

This is typically charted as a cone representing information entropy, risk, or uncertainty that narrows (representing the increased familiarity, predictive power, improved planning, tooling, etc. that reduces risk over time).

Udemy course - Section 6. Cone of Uncertainty

  1. https://www.udemy.com/course/product-owner-course/learn/lecture/21521908#overview

PSPO I: Topic Review

Role/Item | Verb | Item
Developers | define | the Definition of Done
Developers | manage | the Sprint Backlog
Developers | estimate | all Backlog (Sprint, Product, Team) item efforts
Sprint Backlog items | are a subset of | the Product Backlog
The Product Backlog | is | the source of truth for Product requirements
Product Owner | manages | the Product Backlog
Product Owner | determines | Product releases
Stakeholders | are present at | Sprint Planning, Sprint Review
The Sprint Goal | provides | guidance about the current Increment

Topics for review.

  1. Sprint Backlog items are a subset of all items in the Product Backlog.
    • In practice, Sprint Backlog is often kept distinct from Product Backlog. A Sprint Backlog is usually equivalent to some subset of a Team's Backlog. (In practice, a Team might be working on or at the intersection of multiple Products.)
    • Nevertheless, for the PSPO test / "Scrum Orthodoxy", the distinction above is important.
  2. Scrum Masters don't enforce the asking of Daily Scrum questions, they facilitate Daily Scrum (even when there are no action items).
    • In practice, this was rarely observed by the teams we had Scrum Masters for.
  3. The Product Owner is primarily responsible for the Product Backlog and the Developers primarily responsible for the Sprint Backlog.
    • With the permission of the Product Owner anyone can make changes to the Product Backlog.
  4. The Product Owner is primarily responsible for identifying and facilitating Stakeholder interactions (and using any feedback gained from Sprint Reviews to improve the Product, Product Vision, Product Backlog, etc.).
  5. Sprints can only be canceled if:
    • A Product Owner determines they should be.
    • When the Sprint Goal is obsolete, loses relevance, or is no longer of value to the current Increment.
  6. Sprint Goals are defined and shaped by the entire Scrum Team.
  7. All time estimates (in all Backlogs) are primarily handled by the Developers (not the Product Owner).
  8. Release depends on:
    • "The customers that will be constrained by the new release" - how does a change hamper current use?
    • "The risk that the product’s value can get out of line with the marketplace" - how relevant/current/useful is the Product Increment
    • "The costs and benefits of the upgrade" - cost/benefit calculations.
    • "Can customers actually absorb the new release?" - can and will customers use the new Features/Version?
    • Is determined by the Product Owner (and informed by the Definition of Done).
  9. Multiple Teams working on the same Product should share the same Product Backlog, Definition of Done, and Product Owner. Additionally:
    • Multiple Teams completing their Sprints at the same time can and should combine their efforts into both a single Increment and Sprint Review.
    • If multiple Teams are working on integrations, it's the responsibility of the Developers to integrate resources (not the Product Owner).
  10. Sprints start immediately following the conclusion of a preceding one.
  1. https://scrumguides.org/scrum-guide.html

Agile: Misc. Topics

Kanban

Used with Agile, Scrum to assist with Backlog and Sprint management:

  1. Used to visualize a Scrum Sprint Backlog
  2. Scrum doesn't mandate a specific way to represent or track Work items.
  3. Usually depicted in columns where Work items move from left to right through various stages of completion.
  4. The right-most column represents those items that meet the Scrum Definition of Done.

DEEP

Used with Agile, Scrum to aid with Backlog management:

  1. Detailed (Appropriately)
    • Work items should be described and clear to a reasonable extent.
    • Developers should be able to complete the task with minimal obstruction, confusion, or need for additional explanation.
  2. Estimated
    • Uses some specified way to determine relevant estimates of time or complexity for meeting the Definition of Done.
  3. Emergent
    • Issues arise organically, sourced from many Stakeholders and Developers
    • Dynamic, evolves
    • Updated on an ongoing basis
  4. Prioritized
    • Uses some specified mechanism for clearly ordering Work items by ROI w.r.t. the Product Vision or Goal.
  1. https://www.atlassian.com/agile/kanban
  2. https://www.productplan.com/glossary/deep-backlog/
  3. https://www.testgorilla.com/blog/product-owner-interview-questions/

AWS: Certified Cloud Practitioner

Some last notes I took before taking and passing the AWS Certified Cloud Practitioner Exam (February 22, 2022).

Key areas I wanted to focus on and understand better.

Databases

  1. DynamoDB - Unstructured NoSQL, auto-scales
  2. Aurora - Cloud-first MySQL and Postgres replacement, self-healing, Aurora is more performant, durable, scalable, resilient than RDS
  3. Redshift - data warehouse
  4. RDS - Managed DB, supports more database engines than Aurora (e.g. Oracle)

AWS Billing and Cost Management Tools

By order of information:

  1. AWS Cost Explorer - query resource cost via API, visual UI, the highest level of granularity
  2. AWS Cost Reports - generates S3 reports
  3. AWS Budgets - predict spending, optimize use, some forecasting

AWS Data Migration Tools

By order of max data to transfer:

  1. Snowcone - GB to TB
  2. Snowball Edge - up to ~80 TB per device
  3. Snowmobile - up to 100 PB per Snowmobile

AWS IAM distinctions

  1. Policy - an object that defines an identity’s permissions
  2. Role - groupings of policy that facilitates a specific set of responsibilities
  3. User
  4. Group

AWS Gateway differences

  1. API - Allows access to API endpoints, methods
  2. Internet - VPC to public internet, bidirectional
  3. NAT - resources in VPC to public internet, unidirectional
  4. File, Storage - optimizes multipart uploads and bandwidth for file uploading

AWS Identity Management services

  1. Cognito vs AWS SSO - Access to Apps, Services vs. Access Across AWS Accounts

Note that AWS SSO has been deprecated and replaced with AWS IAM Identity Center.

Different AWS security services

  1. AWS Inspector - Finds and identifies security vulnerabilities and security best practices, EC2
  2. AWS Trusted Advisor - AWS best practices (general), an AWS Support service
  3. AWS Security Hub - Integrates with Trusted Advisor, finds and recommends improvements to security practices
  4. AWS GuardDuty - Threat analysis on logs

App/Resource security:

  1. AWS Shield - DDOS
  2. AWS WAF - web app exploits
  3. AWS Network Firewall - inbound, outbound rules

Keys/licenses:

  1. Secrets Manager - App secrets, DB credentials
  2. KMS, CloudHSM - generate and sign cryptographic keys - ERC20, SSL, Web Server identity verification
  3. Artifact - Compliance
  4. IAM - Permissions
  5. Certificate Manager - TLS

AWS network security differences

  1. Network ACL - applies to VPC
  2. Security Groups - apply to instances
  3. AWS Network Firewall - applies to networks

Response Times

  1. Business
    • < 4 hours production system impaired
    • < 1-hour production system down
  2. Enterprise
    • < 4 hours production system impaired
    • < 1-hour production system down
    • < 15-minute business critical
    • Also, the only tier that includes a Technical Account Manager (TAM)
    • Concierge support
  1. https://www.udemy.com/course/aws-certified-cloud-practitioner-practice-exams-amazon/learn/quiz/4724092#overview
  2. https://www.aws.training/Certification
  3. https://www.credly.com/users/adam-gerard/badges

AWS SAA-C03: Overview

Some notes I took before taking the AWS Certified Solutions Architect - Associate Exam.

Key areas I wanted to focus on and understand better.

Conventions

I'll use the stylistic format AWS <SERVICE_NAME> to indicate an AWS Service rather than a feature of that Service.

Test Topics

Test Topics and some of their associated services.

  1. Domain 1: Design Secure Architectures - 30%
    • AWS IAM
    • AWS Control Tower
    • AWS KMS
    • AWS Cognito
    • AWS GuardDuty
    • AWS Macie
    • AWS Shield
    • AWS WAF
    • AWS Secrets Manager
    • AWS VPC
    • AWS Storage Services
  2. Domain 2: Design Resilient Architectures - 26%
    • AWS SQS
    • AWS Secrets Manager
    • AWS SNS
    • AWS Fargate
    • AWS Lambda
    • AWS API Gateway
    • AWS Transfer Gateway
    • ALB
    • AWS Route 53
  3. Domain 3: Design High-Performing Architectures - 24%
    • AWS S3
    • AWS Batch
    • AWS Athena
    • AWS Lake Formation
    • AWS Storage Gateway
    • Amazon Kinesis
    • AWS CloudFront
    • AWS DirectConnect
    • AWS VPN
    • AWS EFS
    • AWS EBS
    • AWS ElastiCache
    • AWS Data Sync
    • AWS Glue
    • AWS EMR
  4. Domain 4: Design Cost-Optimized Architectures - 20%
    • AWS Cost Explorer
    • AWS Cost Reports
    • AWS Budget

High Availability

AWS Regions and Availability Zones

Replication

Disaster Recovery

  1. The very helpful: https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c03
  2. Also: https://portal.tutorialsdojo.com/courses/aws-certified-solutions-architect-associate-practice-exams/

AWS SAA-C03: IAM

Some finer distinctions

Kinds of IAM Policies

Policy Evaluation Logic

Evaluation Factors

Precedence

In order of precedence:

  1. An explicit Deny
  2. An Allow within a Service Control Policy
    • If not, implicitly Deny
  3. An Allow granted to a Resource and by an associated Resource-Based Policy
  4. An Allow granted to an Identity and by an associated Identity-Based Policy
    • If not, implicitly Deny
  5. An Allow granted within a Permissions Boundary
    • If not, implicitly Deny
  6. An Allow granted to a Session Principal: (a) with a Session Policy or (b) within a Role Session
    • If not, implicitly Deny
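The ordering above can be summarized as: an explicit Deny always wins; otherwise at least one applicable Allow must exist, or the request is implicitly Denied. A toy Python model of just that core Deny/Allow logic (a sketch only - it deliberately ignores the SCP/boundary/session layering):

```python
def evaluate(statements):
    """Toy IAM evaluation: explicit Deny wins; else any Allow; else implicit Deny."""
    effects = [s["Effect"] for s in statements]
    if "Deny" in effects:
        return "Deny"    # an explicit Deny always takes precedence
    if "Allow" in effects:
        return "Allow"
    return "Deny"        # no matching statement -> implicit Deny

print(evaluate([{"Effect": "Allow"}, {"Effect": "Deny"}]))  # Deny
print(evaluate([{"Effect": "Allow"}]))                      # Allow
print(evaluate([]))                                         # Deny
```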

AWS Organizations

  1. An account management service that consolidates multiple AWS Accounts into a higher, top-level, organizational unit.
  2. Consolidated Billing for all associated/grouped Accounts.
  3. Global, cross-regional.

AWS Directory Services

  1. AWS Managed Microsoft Active Directory.

AWS Control Tower

  1. Simplifies and standardizes the setup and governance of AWS multi-account environments.
  2. Extends AWS Organizations.
  1. https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-basics

AWS SAA-C03: Security

General AWS security.

Tokens

AWS STS

AWS Security Token Service

  1. Provides temporary credentials for an AWS Account or IAM User.
  2. One-time use or until the token expires
    • Can be better for granting temporary permissions than setting an IAM Policy or assuming a Role

Secrets

AWS KMS

AWS Key Management Service

  1. AWS KMS manages encryption Keys.
  2. KMS Keys
    • Symmetric AES-256
    • Asymmetric RSA, ECC
    • Multi-region Keys
      • Managed independently although they can be used interchangeably.
      • Destroying one does not destroy the others.
  3. Integrates with most AWS Services.

AWS SSM

  1. Secure store for configuration and secrets.
  2. Optional AWS KMS integration.

AWS Secrets Manager

  1. Newer service for storing secrets.
  2. Can configure forced rotation of secrets every specified number of days.
  3. Uses AWS KMS to encrypt secrets.

TLS and SSL

AWS Certificate Manager

  1. Manage, deploy, and provision TLS and SSL certificates.
  2. Supports both public and private certificates.
  3. Supports automatic certificate renewal.
  4. Integrates with:
    • AWS API Gateway
    • Application Load Balancers

Doesn't integrate with AWS EC2.

Firewalls

AWS WAF

  1. For protecting web apps from common web exploits (Layer 7, HTTP)
  2. Deployed on:
    • Application Load Balancers
    • AWS API Gateways
    • AWS CloudFront
    • AWS AppSync GraphQL API
    • AWS Cognito User Pool
  3. Define Web Access Control Lists (ACLs):
    • HTTP Method
    • IP Address
    • Geo-region

AWS Firewall Manager

  1. Manage rules for all AWS Accounts in an AWS Organization
  2. Common sets of security rules for:
    • AWS WAF
    • AWS Shield Advanced
    • AWS EC2 Security Groups
    • AWS Network Firewall (VPC)
    • AWS Route 53 Resolver DNS Firewall

DDoS

AWS Shield

  1. Distributed Denial of Service (DDoS) protection.
  2. AWS Shield Standard - Free.
  3. AWS Shield Advanced - $3,000/month per AWS Organization.

Automated Detection

AWS GuardDuty

  1. Intelligent threat discovery to protect AWS Accounts.
  2. Uses Machine Learning for anomaly detection, plus 3rd-party data.
  3. Sources data from:
    • AWS CloudTrail Event Logs
    • VPC Flow Logs
    • DNS Logs
    • Kubernetes Audit Logs
  4. Can define AWS EventBridge Rules to trigger on findings.

AWS Inspector

  1. Automated security assessments for EC2 Instances, container images, Lambda Functions.

AWS Macie

  1. Machine Learning and pattern matching service to detect sensitive data in AWS.
  2. Identifies PII.

Active Directory

AWS Directory Service

  1. AWS Directory Service for Microsoft Active Directory - specific to Microsoft AD
    • AWS Managed Microsoft Active Directory (AD)
    • Fully-managed by AWS
  2. Integrates with AWS IAM

Federated Services

  1. Allows multiple identity providers to be combined into a single authentication and authorization process.
  2. Allows multiple identity management systems to be interoperable.
  3. Allows other trusted identity management systems to verify the identity of a user for the others.

AWS SAA-C03: Monitoring

AWS CloudWatch

  1. Monitoring, logging, metrics, alarms

CloudWatch Alarms

  1. Associate with Log Filter Expressions, Metrics.
  2. Trigger based on certain conditions or states.
  3. Composite Alarms monitor multiple other Alarms.

CloudWatch Logs

  1. Log Groups - represent an application.
  2. Log Streams - specific containers, application instances, etc.
  3. Filter Expression - can query across Log Events and trigger Alarms.
  4. Can define Expiration Policies.

CloudWatch Metrics

  1. Use prebuilt or define customized Metrics to associate with Alarms, dashboards.
  2. Belong to CloudWatch Namespaces.
  3. Timestamped

Unified CloudWatch Agent

  1. Deployed onto an AWS EC2 Instance
  2. Used to observe customized metrics (like on Instance CPU use) and send them to AWS CloudWatch

AWS EventBridge

  1. Schedule Cron Jobs.
  2. Or define reactive rules to respond to a service doing something.
  3. Integrates with most other AWS services.

AWS CloudTrail

  1. Provides governance, compliance, and auditing for AWS Accounts.
  2. Trace API calls made within an AWS Account across multiple services.

CloudTrail Events:

AWS Config

Used to assess, audit, and evaluate the configurations of AWS resources.

AWS SAA-C03: CloudFront

AWS CloudFront Price Classes

In order by included regions.

  1. Price Class All - all regions, best performance.
  2. Price Class 200 - most regions but excludes the most expensive regions.
  3. Price Class 100 - only the least expensive regions.

AWS CloudFront Features

  1. Geo-Restriction - restrict access by viewer country (allow/block lists).
  2. Integrates with AWS WAF.
  3. Cache Invalidation - set a Time to Live (TTL) and automatically delete files from the cache you're serving from.

AWS Global Accelerator

  1. Uses the AWS internal network to route traffic to your applications.
  2. Uses Edge Locations to send traffic to your app.
  3. Uses Anycast IP which is created for your app.
    • All servers hold the same IP Address.
    • A client is routed to the nearest one.

AWS Global Accelerator is usually a better option than Route 53 Geoproximity Routing for large, globally distributed, apps.

AWS SAA-C03: Networking

VPC

Virtual Private Cloud

VPC CIDR blocks should not overlap.

Refer to: https://stackoverflow.com/a/56834387 and IP Addresses.

And: https://docs.aws.amazon.com/vpc/latest/userguide/subnet-sizing.html

Also: https://www.rfc-editor.org/info/rfc1918

VPC Subnet

AWS reserves 5 IP (IPv4) Addresses in each Subnet. For example, given CIDR block 10.0.0.0/24:

  1. 10.0.0.0 would be reserved as the Network Address.
  2. 10.0.0.1 would be reserved for the VPC router.
  3. 10.0.0.2 would be reserved for mapping to the Amazon-provided DNS.
  4. 10.0.0.3 is reserved for future use.
  5. 10.0.0.255 - the Network Broadcast Address is not supported in a VPC, so AWS reserves it (to prevent it from being used).
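The five reserved addresses can be computed for any Subnet with Python's standard ipaddress module (a quick sanity-check sketch; the function name is my own):

```python
import ipaddress

def aws_reserved_ips(cidr: str) -> list[str]:
    """The 5 addresses AWS reserves in an IPv4 Subnet:
    network address, VPC router, DNS, future use, and broadcast."""
    net = ipaddress.ip_network(cidr)
    first_four = [str(net.network_address + i) for i in range(4)]
    return first_four + [str(net.broadcast_address)]

print(aws_reserved_ips("10.0.0.0/24"))
# ['10.0.0.0', '10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.255']
```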

VPC Peering

  1. Privately connects two VPCs using AWS' own internal network.
  2. Connected VPCs behave as if they are the same network.
  3. Overlapping CIDRs shouldn't be used in any of the connected networks.
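Whether two CIDRs overlap is easy to check with the standard ipaddress module (a small sketch for vetting Peering candidates; the function name is my own):

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True when two CIDR blocks overlap (which rules out VPC Peering)."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))  # True  - cannot peer
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False - safe to peer
```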

Endpoints

  1. So-called Private Links.
  2. Allows one to connect AWS Services using a Private Network rather than over the Public Internet.
  3. Consider the scenario where an AWS Service (say AWS S3) must be connected to from within a Private VPC.
    • One would define a Private (Interface) Endpoint and/or Gateway Endpoint and connect without going through the Public Internet.

Flow Logs

  1. Captures all information about network traffic:
    • VPC Flow Logs
    • Subnet Flow Logs
    • Elastic Network Interface Flow Logs
  2. Used to troubleshoot connectivity issues.

Traffic Mirroring

  1. Duplicate network traffic/requests so they can be sent to security appliances.
  2. Used to capture and inspect network traffic within a VPC.
  3. Monitor, troubleshoot, inspect connectivity, security, and traffic.

Network Security

Network and VPC-specific security.

Bastion Host

  1. SSH Bastion (Jump) Host.
  2. Configuration:
    • Bastion Host Security Group: Allow the Inbound Port 22 on a restricted CIDR (say, the public CIDR being used).
      • This allows authenticated persons to connect using SSH for further verification.
    • EC2 Instance Security Groups: Allow the Inbound Private IP of the Bastion Host (or its Security Group)
      • Allows the Bastion Host to jump to the EC2 Instances

NAT Instance

Network Address Translation

  1. Allows EC2 Instances in Private Subnets to connect to the internet.
  2. Requirements:
    • Must be launched in a Public Subnet.
    • Must disable EC2 setting: Source / destination Check.
    • Must have an Elastic IP attached to it.

Deprecated (in favor of NAT Gateways) but apparently still covered on the exam.

NACL

Network Access Control List

  1. Controls traffic from and to Subnets.

Network Firewall

  1. Protects a VPC.
  2. From Layer 3 to Layer 7 protection.

Remote Access

Site to Site VPN

  1. A fully-managed Virtual Private Network (VPN).
  2. Creates a secure connection between an on-premises VPN and an AWS VPC in the cloud.

Direct Connect

  1. Provides a dedicated private connection from a remote network to a VPC.

Gateways

Used to connect networks (and often for Remote Access scenarios).

Virtual Private Gateway

  1. Used to facilitate a Site-to-Site VPN connection.
  2. Attached to the VPC one will be connecting a VPN to.

Customer (Device) Gateway

  1. A physical device that connects a physical, remote, network to an AWS VPC in the cloud.

Transit Gateway

  1. Used to simplify complex network topologies.
  2. Cross-regional connections.
  3. Can peer Transit Gateways across AWS Regions.
  4. Examples:
    • Hub-and-Spoke (star) topology connecting 6 VPCs across 4 AWS Regions.
    • Connecting 3 VPCs (A,B,C) so that A is connected to B and B is connected to C but not A to C or vice-versa.

Internet Gateway

  1. Allows resources in a VPC to connect to the public Internet.
  2. Route Tables must be defined to direct inbound and outbound traffic through it.

NAT Gateway

  1. Allows EC2 Instances in a Private Subnet to initiate connections to the Internet.
  2. Deployed in a Public Subnet with Private Subnet Route Tables updated to point internet-bound traffic to the NAT Gateway.
  1. https://stackoverflow.com/a/56834387
  2. https://docs.aws.amazon.com/vpc/latest/userguide/subnet-sizing.html
  3. https://www.rfc-editor.org/info/rfc1918

AWS SAA-C03: Route53

  1. A Domain Registrar
  2. Handles typical DNS attributes:
    • A - maps to IPv4
    • AAAA - maps to IPv6
    • CNAME - maps Hostname to another Hostname
    • NS - specify Name Servers for DNS resolution
  3. Handles record settings:
    • TTL
    • Routing/forwarding

Public vs Private

Routing

  1. Geolocation - route by user location
  2. Weighting
    • Controls the percentage of requests and traffic that go to a specific resource or URL
    • Assign by relative weight
  3. Failover - route to a backup location
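Weighted routing amounts to a cumulative-weight pick. A toy illustration of the idea (not Route 53's implementation; the record names and weights are hypothetical):

```python
import random

def pick_endpoint(records, rand=None):
    """Pick a record with probability proportional to its relative weight.
    `records` is a list of (endpoint, weight) pairs."""
    total = sum(w for _, w in records)
    r = random.uniform(0, total) if rand is None else rand
    upto = 0.0
    for endpoint, weight in records:
        upto += weight
        if r < upto:
            return endpoint
    return records[-1][0]  # guard against r == total

# 70% of traffic to "blue", 30% to "green" (made-up hostnames).
records = [("blue.example.com", 70), ("green.example.com", 30)]
```

Passing `rand` explicitly makes the pick deterministic, which is handy for testing the threshold behavior.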

Health Checks

AWS Route 53 Health Checks can be configured to monitor:

  1. Endpoints - are associated with AWS Data Centers.
    • AWS Route 53 will periodically ping so-configured Endpoints.
  2. Other Health Checks
    • Called a Calculated Health Check.
    • A compound, combined, or complex Health Check.
  3. Cloud Watch Alarms and the underlying Metrics that are used to configure that Alarm.
    • Will source its data from the underlying Metrics.
    • Or, from an Alarm Data Stream (used to calculate the state of the Alarm).
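A Calculated Health Check's compound semantics boil down to a threshold over child checks. A minimal sketch of the idea (not the Route 53 API):

```python
def calculated_health(child_states, healthy_threshold):
    """Healthy when at least `healthy_threshold` child Health Checks report healthy.
    child_states: list of booleans, one per child Health Check."""
    return sum(1 for s in child_states if s) >= healthy_threshold

# Two of three children must be healthy for the parent to be healthy.
ok = calculated_health([True, True, False], healthy_threshold=2)
degraded = calculated_health([True, False, False], healthy_threshold=2)
```

Setting the threshold to the number of children models AND; a threshold of 1 models OR.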

Comparing Kinds of Health Checks

Consider an EC2 Auto-Scaling Group vs an ALB Health Check:

  1. https://www.stormit.cloud/blog/route-53-health-check/
  2. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-types.html
  3. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-of-endpoints.html#dns-failover-determining-health-of-endpoints-cloudwatch
  4. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-health-checks.html

AWS SAA-C03: Messaging

  1. AWS Simple Queue Service
  2. AWS Simple Notification Service
  3. AWS Kinesis

AWS SQS

  1. Producers send Messages to a (FIFO) Queue that Consumers poll
  2. Default retention: 4 Days (maximum 14 Days)
  3. Used to decouple Application Tiers
  4. SQS scales automatically
  5. An event in an SQS Queue is typically processed by one Consumer (e.g. - with Visibility Timeouts)

Queue Types

  1. SQS Standard Queue
  2. SQS Dead Letter Queue
  3. SQS Delay Queue
  4. SQS FIFO Queue

Batch

Up to 10 messages can be sent, received, or deleted per batch request.

Visibility Timeout

Polling

  1. Short Polling
    • Occurs repeatedly in short time-frames
    • Queries only a subset of SQS servers
  2. Long Polling
    • Waits for messages to arrive (a configurable wait of up to 20 seconds per request)
    • Queries the entire set of SQS servers
    • Is AWS-recommended since it's less costly and more accurate
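The one-consumer-at-a-time behavior (Visibility Timeouts) can be illustrated with a toy in-memory queue. This is not an SQS client; `now` is passed explicitly to keep the example deterministic:

```python
class MiniQueue:
    """Toy queue illustrating SQS-style Visibility Timeouts."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = []  # each entry: [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self, now):
        """Return the first visible message and hide it from other consumers."""
        for msg in self.messages:
            if msg[1] <= now:
                msg[1] = now + self.visibility_timeout
                return msg[0]
        return None

    def delete(self, body):
        """Called by the consumer after successful processing."""
        self.messages = [m for m in self.messages if m[0] != body]

q = MiniQueue(visibility_timeout=30.0)
q.send("order-1")
first = q.receive(now=0.0)    # "order-1" - now hidden from other consumers
second = q.receive(now=1.0)   # None - still inside the visibility timeout
retry = q.receive(now=31.0)   # "order-1" - redelivered because it was never deleted
```

If the consumer never calls `delete`, the message reappears after the timeout - which is exactly how SQS retries failed processing.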

AWS SNS

  1. Event Producers send Messages to one SNS Topic
  2. Event Receivers subscribe to an SNS Topic
  3. The SNS Topic will broadcast Messages to all Receivers
    • An event is sent to and processed by all Receivers
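The fan-out behavior can be sketched in a few lines (a toy pub/sub model, not the SNS API):

```python
class MiniTopic:
    """Toy SNS-style topic: every subscriber receives every published message."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # Unlike SQS (one consumer per message), every handler gets a copy.
        for handler in self.subscribers:
            handler(message)

received = {"email": [], "sqs": []}
topic = MiniTopic()
topic.subscribe(lambda m: received["email"].append(m))  # e.g. an email endpoint
topic.subscribe(lambda m: received["sqs"].append(m))    # e.g. an SQS queue
topic.publish("order-created")
```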

AWS Kinesis

Refer to Data

  1. Collect, process, and analyze streaming data in real-time
  2. IoT Telemetry
  3. Kinesis Data Streams
  4. Kinesis Data Firehose
  5. Kinesis Data Analytics
  6. Kinesis Video Streams

Partition Keys
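Kinesis routes each record to a shard by taking the MD5 hash of its Partition Key into a 128-bit keyspace that is divided into per-shard hash ranges. A sketch (assuming equal-sized ranges, as on a freshly created stream):

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard index via its MD5 hash."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards  # each shard owns a contiguous range
    return min(h // range_size, num_shards - 1)
```

The key property: records with the same Partition Key always land on the same shard, which preserves their ordering.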

AWS MQ

  1. A managed message broker for Apache ActiveMQ and RabbitMQ
  2. Supports open protocols such as MQTT, AMQP, STOMP, and OpenWire

AWS SAA-C03: Data Migration

AWS Snow Family

A device is provided to submit data to AWS directly (and physically) without using one's network.

  1. Snowcone - up to Terabytes
    • HD - 8 TB of HDD Storage
    • SSD - 14 TB of SSD Storage
  2. Snowball Edge - up to Petabytes
    • Storage Optimized - 80 TB
    • Compute Optimized - 42 TB
  3. Snowmobile - up to Exabytes
    • Typically, < 100 PB
    • A physical semi-truck arrives and allows one to transfer up to 1 EB

AWS Edge Computing

  1. Snowcone
  2. Snowball Edge

AWS Transfer Family

Used for file transfers in and out of S3

  1. Supports FTP, FTPS, SFTP
  2. Managed infrastructure
  3. Integrates with Active Directory, LDAP, Okta, AWS Cognito, etc.

AWS Migration Services

  1. AWS App Migration - migrate a full application stack
  2. AWS Database Migration - migrate databases

AWS DataSync

  1. Supports NFS, SMB, HDFS, and S3
  2. On-premises-to-cloud transfers require a DataSync Agent to move the data
  3. Syncs data into S3, EFS, and FSx

AWS Storage Gateway

  1. Connects on-premises data and cloud data.
  2. Typically used to backup data.
  3. Types
    • S3 File Gateway
      • NFS and SMB
      • Integrates with Active Directory
    • FSx File Gateway
      • AWS access for Windows File Server
      • SMB, NTFS, and Active Directory
    • Volume Gateway
      • iSCSI backed by S3
      • Backed by EBS Snapshots
    • Tape Gateway
      • For physical tape drives
      • iSCI
      • Virtual Tape Library backed by S3

AWS FSx

  1. FSx for Windows
  2. FSx for Lustre
    • High performance computing
    • Machine learning
    • Linux cluster
  3. FSx File System
    • Scratch: temporary storage that is fast but impermanent
    • Persistent: data is persisted within the same Availability Zone
  4. FSx for NetApp ONTAP

AWS SAA-C03: Other Data Services

Other AWS tools to process, ingest, query, store, and analyze data.

SQL Based

AWS Athena

  1. Serverless service to query and analyze S3 data.
  2. Supports CSV, JSON, ORC, Avro, and Parquet.

AWS Redshift

  1. Based on PostgreSQL for Big Data analytics.
  2. Query on multiple data sources.
  3. Faster than Athena due to indexing.

AWS OpenSearch

  1. Successor to ElasticSearch.
  2. Security through Cognito, IAM, KMS encryption, TLS.

AWS EMR

Elastic MapReduce

  1. Helps to provision and configure Hadoop.
  2. Bundled with Apache Spark, HBase, Presto, Flink.
  3. Composed of up to hundreds of EC2 Instances.

AWS QuickSight

  1. Serverless machine learning, interactive dashboards.
  2. For business analytics, visualizations, business insights, ad-hoc analysis.
  3. In-memory SPICE engine for imported data.

AWS Glue Based

AWS Glue

  1. Convert data into Parquet format as part of an ETL (Extract, Transform, Load) pipeline.
  2. Converts CSV for use in Lambda Functions or AWS Athena.
  3. Catalog of datasets.

AWS Lake Formation

  1. Data Lake: a central place to store your data.
  2. Clean, transform, discover, and ingest data into your Data Lake.
    • Combine structured and unstructured data in your Data Lake.
  3. Built on AWS Glue.
    • With out-of-the-box blueprints for S3, RDS, and Relational and NoSQL databases

AWS Kinesis Based

Refer to Messaging

Kinesis Data Streams are used to collect and process large streams of data records in real time.

Kinesis Data Firehose is used to stream data into Data Lakes, warehouses, and analytics services.

AWS Kinesis Data Analytics

  1. Real-time analytics on Kinesis Data Streams and Firehose.

AWS Managed Streaming for Kafka

AWS Managed Streaming for Apache Kafka (AWS MSK):

  1. Alternative to AWS Kinesis.
  2. MSK creates and manages Kafka Broker and Zookeeper Nodes (in earlier versions of Kafka).
  3. Data is stored in AWS EBS Volumes for indefinite periods of time.
  4. Has a serverless mode.

AWS SAA-C03: Databases

Choose:

  1. RDBMS
    • AWS RDS
    • AWS Aurora
  2. NoSQL
    • AWS DynamoDB (JSON)
    • AWS ElastiCache (Key-Value)
    • Neptune (Graph)
    • AWS DocumentDB (MongoDB)
    • AWS Keyspaces (Cassandra)
  3. Object Store
    • S3
  4. Data Warehouse
    • AWS Redshift
    • AWS Athena
    • AWS EMR
  5. Search
    • AWS OpenSearch (free text, unstructured search)
  6. Graphs
    • AWS Neptune
  7. Ledger
    • AWS Quantum Ledger Database
    • AWS Managed Blockchain
  8. Time Series
    • AWS Timestream

AWS RDS

  1. Postgres, MySQL, Oracle, MSSQL, MariaDB
  2. For Relational Databases (SQL, JOIN, Table, Column)
  3. Additional security through IAM, Security Groups, SSL
  4. Support for auto-scaling, Read Replicas, and multiple Availability Zones

High Availability

  1. Can provision DB Instances in Primary/Standby or Read Replica/Standby within the same AWS Region
    • If so configured, Standby will be promoted to the Primary DB Instance (say, of several Read Replicas).
    • If so configured, Standby will be promoted to a Read Replica if the primary Read Replica fails.
    • Provides failover support
    • Synchronous data replication
  2. DB Instances can be placed into Multi-AZ clusters.
    • Read Replicas can be placed in differing Availability Zones within the same AWS Region.
    • Read Replicas can be promoted to the Primary DB Instance.

Note that DB updates incur downtime.

RDS Proxy

  1. Allows apps to pool and share DB connections established with a database
  2. Handles failovers itself and reduces failover time by up to 66%
  3. Enforces IAM authentication for your databases
  4. Is never publicly accessible (must be accessed from VPC)

AWS Aurora

  1. Compatible with MySQL and Postgres
  2. Highly distributed
    • Stored in 6 replicas
    • Across 3 Availability Zones
  3. Self-healing, high availability, auto-scaling

Aurora Global Databases

AWS Aurora Global Databases are single logical databases spanning multiple AWS Regions (one primary Region for writes, with read-only secondary Regions), as opposed to AWS DynamoDB Global Tables, which are comprised of many replicated tables treated as one.

AWS DynamoDB

  1. Managed serverless NoSQL database
  2. Provisioned and optional auto-scaling capacity
  3. DAX cluster for read cache
  4. Automated backups up to 35 Days
  5. Event processing - DynamoDB Streams integrate with AWS Lambda or Kinesis Data Streams
  6. Highly available, multiple Availability Zones
  7. Decoupled Reads and Writes

DynamoDB Accelerator

  1. DynamoDB Accelerator (DAX) is a fully managed in-memory cache for AWS DynamoDB offering 10x performance.
  2. Deployed as a cluster.
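DAX's effect can be pictured as a read-through cache sitting in front of the table. A toy model only (DAX actually speaks the DynamoDB API, so applications don't change their calls):

```python
class ReadThroughCache:
    """Toy read-through cache in front of a key-value table (illustrating DAX)."""
    def __init__(self, table):
        self.table = table   # stand-in for the DynamoDB table (a plain dict here)
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1          # served from memory: microsecond-class read
            return self.cache[key]
        self.misses += 1
        value = self.table.get(key)  # fall through to the underlying table
        if value is not None:
            self.cache[key] = value  # populate the cache for future reads
        return value

table = {"user#1": {"name": "Ada"}}
dax = ReadThroughCache(table)
first = dax.get("user#1")    # miss: reads the table, populates the cache
second = dax.get("user#1")   # hit: served from the cache
```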

DynamoDB Global Tables

AWS DynamoDB Global Tables are comprised of many replicated tables distributed across several AWS Regions so that they:

  1. Are treated as one table
  2. Share the same primary key schema

AWS ElastiCache

Caches database data using Redis or Memcached:

  1. Redis:
    • Supports Sets and Sorted Sets
    • Backup and restore features
    • Read replicas for High Availability
    • Multiple Availability Zones
  2. Memcached:
    • No High Availability
    • No backup and restore
    • Multithreaded

AWS Neptune

  1. Fully managed Graph Database
  2. Highly available across 3 Availability Zones
  3. Up to 15 read replicas

AWS Keyspaces

  1. A managed Apache Cassandra-compatible database service
  2. Tables are replicated 3 times across multiple Availability Zones
  3. Auto-scales Tables up and down based on traffic
  4. Uses Cassandra Query Language (CQL)

AWS Quantum Ledger

  1. 2-3x better performance than common ledger blockchain frameworks
  2. Can use SQL
  3. Fully managed, serverless, with high availability replication across 3 Availability Zones
  4. An immutable ledger

AWS SAA-C03: S3

  1. Replication
  2. (File) Versioning

AWS S3 Storage Classes

Note S3 Glacier has been renamed S3 Glacier Flexible Retrieval.

Pricing

AWS users pay for:

  1. Hosting data in AWS S3
  2. Updating or Copying data already in AWS S3
  3. Requests made against items hosted in AWS S3

AWS users don't pay for:

  1. There is no cost for uploading data into AWS S3 itself
    • Although one might pay for transmitting data into a VPC or across AWS Regions

AWS S3 Data Retention

  1. Glacial Vaults
  2. S3 Object Lock - Retention Mode
    • Governance mode - some special permissions can alter
    • Compliance mode - no one can alter
  3. S3 Object Lock - Retention Period
    • Legal Hold - locked until removed
    • Retention Period - a specified period of time

AWS S3 Bucket Security Features

  1. MFA - Multi-Factor Authentication
    • Can be required for deletes
    • Used to protect resources
  2. By URL:
    • CORS - Cross-Origin Resource Sharing - restrict resource access when not on same Domain
    • Pre-Signed URLs - grant temporary, signed access for S3 GET / PUT requests without making the object public
  3. File Encryption - Server-Side Encryption (SSE)
    • SSE-S3 - default
    • SSE-KMS - SSE with AWS KMS
    • SSE-C - SSE with Customer Provided Keys
  4. Bucket Policies
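The idea behind Pre-Signed URLs (a signature over the request details plus an expiry, so a link can be shared without making the object public) can be sketched with an HMAC. This is a conceptual illustration, not AWS's actual SigV4 signing process; the secret and bucket names are stand-ins:

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"demo-secret"  # stand-in; real pre-signed URLs use SigV4 credentials

def presign(bucket, key, expires_at, secret=SECRET):
    """Sign (method, bucket, key, expiry) so the URL cannot be altered."""
    payload = f"GET\n{bucket}\n{key}\n{expires_at}".encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    qs = urlencode({"Expires": expires_at, "Signature": sig})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{qs}"

def verify(bucket, key, expires_at, signature, now, secret=SECRET):
    """Recompute the signature and check both integrity and expiry."""
    payload = f"GET\n{bucket}\n{key}\n{expires_at}".encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and now < expires_at
```

Tampering with any signed field (the key, the expiry) invalidates the signature, and a correct signature still fails once the expiry passes.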

Other Features

  1. S3 Batch Operations - use S3 Batch and the S3 API
  2. Supports multi-part uploading
  3. S3 Transfer Acceleration uses intelligent routing to reduce the time and distance it takes to upload and download files from AWS S3
  4. Versioning
  5. Supports static site hosting
  6. CloudFront Origins specify where AWS CloudFront gets content from to serve to viewers.
    • Examples:
      • An S3 Bucket
      • An HTTP server running on AWS EC2

Static Websites

Allowed URL formats:

  1. http://bucket-name.s3-website.Region.amazonaws.com
  2. http://bucket-name.s3-website-Region.amazonaws.com
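The endpoint formats above can be assembled programmatically. A trivial sketch (which separator applies, dot or dash, depends on the Region):

```python
def s3_website_endpoint(bucket: str, region: str, dashed: bool = False) -> str:
    """Build the S3 static-website endpoint URL in either the dot or dash form."""
    sep = "-" if dashed else "."
    return f"http://{bucket}.s3-website{sep}{region}.amazonaws.com"
```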
  1. https://aws.amazon.com/s3/storage-classes/

AWS SAA-C03: EC2

AWS EC2 Instance Purchasing Options

Scheduled Reserved instances aren't presently offered.

Savings Plans can be used to reduce costs by making a commitment to a consistent amount of usage for 1 or 3 years.

AWS EC2 Reserved Instances

Reserved Instances - Reserved for 1 or 3 years.

Generally speaking, All Upfront payments will be lower in the long-run than No Upfront payments.

  1. All Upfront - Complete payment at the start regardless of hours eventually used
  2. Partial Upfront - Portion paid at the start with the remainder being billed at a fixed rate regardless of hours eventually used
  3. No Upfront - Billed at a fixed rate regardless of hours eventually used
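The trade-off between the three options reduces to simple arithmetic. The prices below are made up purely for illustration; as the section notes, All Upfront generally comes out cheapest over the full term:

```python
def total_cost(upfront: float, hourly: float, years: int = 1) -> float:
    """Total Reserved Instance cost over the term (hypothetical rates)."""
    hours = years * 365 * 24  # ignore leap years for this sketch
    return upfront + hourly * hours

# Illustrative (made-up) pricing for the same instance class over 1 year:
all_upfront     = total_cost(upfront=800.0, hourly=0.0)    # pay everything now
partial_upfront = total_cost(upfront=450.0, hourly=0.045)  # split payment
no_upfront      = total_cost(upfront=0.0,   hourly=0.10)   # fixed hourly rate
```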

Reserved Instances have a Convertible payment option:

  1. Convertible - Can be exchanged with another Convertible Reserved Instance
  2. You cannot exchange:
    • All Upfront Reserved Instances for No Upfront Convertible Reserved Instances.
    • Partial Upfront Reserved Instances for No Upfront Convertible Reserved Instances.

AWS EC2 Instance Types

Public vs. Private IP

  1. Public IP - Used on the public, global internet. No two Public IP Addresses can be the same
  2. Private IP - Used within private Subnets

Placement Groups

  1. Cluster - Same rack and Availability Zone. Lowest latency but most susceptible to simultaneous hardware failure
  2. Spread - All the EC2 Instances are deployed on different hardware, Availability Zones, etc.
    • Maximizes High Availability
    • Limited to 7 Instances per Availability Zone
  3. Partitions - Think multiple Cluster Placement Groups spread across multiple Availability Zones
    • Up to 100s of Instances per Partition
    • Up to 7 Partitions per Availability Zone

Elastic Network Interfaces

  1. Can be attached to and detached from EC2 Instances within the same Availability Zone
  2. Used to assign a fixed Public or Private IP Address

AWS EC2 Hibernate

  1. Stores RAM into a persistent state on an encrypted root EBS volume
  2. Relaunching or restarting the Instance is much faster

Instance Store, EBS, and EFS

Root Volumes:

  1. Can be an Instance Store
    • Limited to 10GB
    • Ephemeral stores for use with temporary data
  2. Or a EBS Backed Root Volume
    • Limited to 1TB

Comparison:

  1. Instance Store:
    • 1-1 with an Instance
    • Has good I/O performance since they are directly attached
    • They're ephemeral however: if the Instance is stopped or terminated, all persisted data is lost
  2. Elastic Block Storage:
    • Attach to one Instance at a time
    • Locked at the Availability Zone (cannot be moved to another Availability Zone without a Restoration Snapshot)
    • Better for long-term storage than Instance Stores
  3. Elastic File Storage:
    • Attaches to multiple (hundreds of) Instances at a time
    • Not limited to a single Availability Zone
    • Typically more expensive
    • Networked storage

Load Balancers

AWS offers Elastic Load Balancer as a managed service. It comes in a few varieties:

  1. Application Load Balancer:
    • Layer 7 (HTTP), HTTP/2, WebSocket, HTTPS
    • Application balancing
    • Routing based on URL Path, Hostname, Query, Headers
    • Routes to EC2 Instances, ECS Tasks, Lambda Functions, IP Address
  2. Network Load Balancer:
    • Layer 4 (TCP), TCP/UDP forwarding
    • Extreme performance
    • Routes based on IP Address not specific AWS service
  3. Gateway Load Balancer:
    • Layer 3 (Network), IP packets
    • Uses Route Tables to route traffic for an entire VPC
    • Primary use is to be the single place for all inbound traffic: firewall, security monitoring, packet analysis, etc.

Classic Load Balancers are being deprecated at the end of 2022.

The Sticky Session feature ensures that requests from the same user are always routed to the same EC2 Instance (preserving the same application session and context).

Cross Zone Load Balancing:

  1. Load balancing is split between all Instances across all Availability Zones
  2. Otherwise each Instance in an Availability Zone will divide the assigned load balancing weight (for that Availability Zone) by the total number of Instances within that single Availability Zone
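The difference is easy to see numerically. A sketch (zone names and instance counts are placeholders):

```python
def instance_share(zone_counts, cross_zone: bool):
    """Fraction of total traffic each instance receives, keyed by AZ.
    zone_counts: instances per AZ, e.g. {"az-a": 2, "az-b": 8}."""
    if cross_zone:
        total = sum(zone_counts.values())
        return {az: 1 / total for az in zone_counts}  # even split across all instances
    zones = len(zone_counts)
    # Each AZ gets an equal share of traffic, divided among its own instances.
    return {az: (1 / zones) / n for az, n in zone_counts.items()}

with_cz = instance_share({"az-a": 2, "az-b": 8}, cross_zone=True)
without = instance_share({"az-a": 2, "az-b": 8}, cross_zone=False)
```

With cross-zone balancing every instance carries 10% of the traffic; without it, the two instances in az-a each carry 25% while the eight in az-b each carry 6.25%.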

Auto-Scaling Groups

  1. EC2 Instances can be combined into Auto-Scaling Groups
    • EC2 Auto Scaling Launch Templates
    • EC2 Auto Scaling Launch Configuration
    • As a general rule of thumb: EC2 Auto Scaling Launch Templates > Launch Configurations
      1. They (Templates) provide more configuration features
      2. They support multiple versions
      3. Templates are AWS-recommended
  2. They create new Instances and terminate them based on configurable triggers and Dynamic Scaling Policies
    • For example: CloudWatch alarms
  3. Auto-Scaling Group Minimum and Maximum Capacities apply to the total number of EC2 Instances across all Availability Zones

Lifecycle

  1. Scaling Out takes precedence over Scaling In
    • New EC2 Instances are launched before existing ones are removed
  2. Scaling In terminates EC2 Instances
    • By default, the EC2 Instance with the oldest Launch Configuration is terminated first
  1. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-convertible-exchange.html
  2. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html
  3. https://jayendrapatil.com/aws-auto-scaling-lifecycle/

AWS SAA-C03: Containers

AWS Elastic Container Service

AWS ECS can automatically increase and decrease the number of ECS Tasks (Service Auto Scaling).

ECS Launch Types:

  1. EC2
    • Launch Docker containers on AWS.
    • User provisions and maintains the infrastructure (underlying EC2 Instances).
  2. Fargate
    • User just creates the Task Definitions.
    • Serverless computing.
    • User doesn't manage the underlying EC2 Instances.

IAM Roles:

  1. EC2 Instance Profile
    • EC2 Launch Type only.
  2. ECS Task Role
    • Assigned to a Task.

AWS Elastic Container Registry

  1. Store and manage Docker images on AWS.
  2. Private and Public repositories.
  3. Backed by S3.
  4. Access via IAM permission.

AWS Elastic Kubernetes Service

  1. A managed Kubernetes alternative to ECS.
  2. Managed Node Groups.
  3. Self-Managed Nodes.
  4. AWS native solution for Kubernetes.

AWS AppRunner

  1. Fully managed app service.
  2. Builds and deploys apps.

AWS SAA-C03: Serverless

Serverless Computing is a paradigm where infrastructure is provided as a fully managed service (abstracting away the underlying bare-metal and operating-system resources).

AWS Fargate

  1. The user creates Task Definitions but AWS manages the rest of the ECS infrastructure
  2. Limited to:
    • 100 Tasks per Region per Account (default)
    • 1,000 Tasks per Service
    • By Amazon ECS Service Quotas (limits)

Consult the Elastic Container Service article.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html

AWS Lambda

  1. A user creates Lambda Functions but doesn't have to manage the underlying infrastructure to execute them.
  2. Lambda Functions are associated with specific Endpoints in AWS API Gateway and are invoked using standard HTTP REST methods and URL context paths.
  3. Lambda Functions are stateless.
  4. Indeed, they are ideal for stateless workloads.
  5. There's a small delay when a Lambda Function is first called.
    • A Lambda Function Context is created from a Cold state (the underlying resources are initialized and made available).
    • However, a Lambda Function Context persists for 15 minutes in a Hot state.
    • So, sequential calls will execute without the initial delay.
  6. Lambda Functions time out after a maximum execution duration of 15 minutes.
  7. The default maximum concurrency is 1,000 simultaneous executions per AWS Region, shared across the account's Lambda Functions (this can be increased by request).
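The Cold/Hot behavior in item 5 can be modeled as container reuse with a keep-alive window. A toy model; the timings are illustrative, not AWS-published guarantees:

```python
class MiniLambdaRuntime:
    """Toy model of Lambda container reuse: the first invocation pays an init
    cost (cold start); later calls within the keep-alive window reuse the
    warm context."""
    KEEP_ALIVE = 15 * 60  # seconds a warm context is retained (illustrative)

    def __init__(self):
        self.warm_since = None
        self.cold_starts = 0

    def invoke(self, handler, event, now):
        if self.warm_since is None or now - self.warm_since > self.KEEP_ALIVE:
            self.cold_starts += 1  # initialize a fresh execution context
        self.warm_since = now      # each invocation refreshes the window
        return handler(event)

rt = MiniLambdaRuntime()
handler = lambda event: event.upper()
a = rt.invoke(handler, "hi", now=0)             # cold start
b = rt.invoke(handler, "hi", now=60)            # warm: context reused
c = rt.invoke(handler, "hi", now=60 + 16 * 60)  # cold again: window elapsed
```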

IAM Policies

  1. Execution Roles - grant a Lambda Function permission to access other resources or perform certain operations.
  2. Resource-Based Policy - how a Lambda Function itself can be used, invoked, or called by users or other services.

AWS API Gateway

  1. Connect AWS Lambda Functions to API Gateway Endpoints.
  2. Associate each endpoint with HTTP methods (PUT, POST, GET, DELETE, PATCH, OPTIONS).
  3. Can define HTTP Request and Response Schemas.

AWS Step Functions

  1. For sequential or "chained" operations that might require a lengthy or significant amount of execution time.

Serverless Stack

A commonly found and fully Serverless stack will comprise:

  1. AWS DynamoDB - fully managed serverless DB.
  2. AWS DynamoDB DAX - for Caching and read acceleration
  3. AWS Lambda
  4. AWS Cognito - for identity management and user authentication

AWS Proton

AWS Proton standardizes serverless architecture deployments.

  1. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html
  2. https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html

AWS SAA-C03: Machine Learning

Image Recognition

AWS Rekognition

  1. Facial analysis and search using Machine Learning (ML) for user verification.
  2. Find objects, people, text, and images in photos or video.

Speech and Text

AWS Transcribe

  1. Automatically convert speech to text.
  2. Deep Learning (Automatic Speech Recognition - ASR)

AWS Polly

  1. Convert text into speech.

AWS Translate

  1. Language translation.

AWS Lex + Connect

  1. Automatic Speech Recognition (ASR) to convert speech into text.
  2. Natural Language Understanding to recognize the intent of text, callers.
  3. For chatbots, call center bots.
  4. Receive calls, create contact flows, cloud-based virtual contact center.

AWS Comprehend

  1. Natural Language Processing (NLP) to find insights and relationships in text.
  2. Fully managed, serverless.
  3. Comprehend Medical: a specialized service for unstructured medical/clinical text (HIPAA-eligible).

AWS Textract

  1. Extract text, handwriting, and data from any scanned documents.
  2. Extract data from forms and tables.

Fully Managed Services

AWS SageMaker

  1. Fully managed service for developers to build Machine Learning models.

AWS Forecast

  1. Fully managed service for developers to build highly accurate forecasts.

AWS Kendra

  1. Fully managed document search service.
  2. Extracts answers from within a document.
  3. Natural language search capabilities.

AWS Personalize

  1. Fully managed service for making real-time personalized recommendations.

AWS DOP-C02: Overview

Notes I took before taking the AWS Certified DevOps Engineer - Professional (DOP-C02) Exam.

Test Topics

https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS-Certified-DevOps-Engineer-Professional_Exam-Guide.pdf

AWS DOP-C02: Domain 1

SDLC Automation

Task Statement 1.1: Implement CI/CD pipelines:

Example CI/CD: GitHub -> Elastic Beanstalk -> CodePipeline

  1. Node.js application Source Code is checked into a GitHub Repository.
  2. The GitHub Repository and target Branch are associated with an AWS Elastic Beanstalk Environment and AWS Elastic Beanstalk Application.
  3. An AWS CodePipeline Pipeline is defined to Trigger a Build and then Deploy all changes into the target AWS Elastic Beanstalk Environment on any Commit.

Example CI/CD: Docker -> ECR -> ECS

  1. Docker Image(s) is built locally.
  2. Docker Image(s) is pushed to an AWS ECR Repository.
  3. AWS ECS Fargate - Task Definition points to the URL of the AWS ECR Repository to retrieve the Docker Image.
  4. Application Load Balancers, private Subnets, and/or public Subnets are defined and selected for deployment into.
  5. The AWS ECS Fargate - Task Definition gets deployed as a Managed, Serverless, AWS ECS Task.

Example CI/CD: GitHub -> S3 -> CodeBuild -> Lambda

  1. Python Lambda Handler is checked into a GitHub Repository with a buildspec.yml.
  2. The GitHub Repository is associated with AWS CodeBuild.
  3. The GitHub Repository is zipped and saved as an Artifact on AWS S3.
  4. An AWS Lambda Function is defined and associated with the AWS S3 Artifact.
  5. The AWS Lambda Function is deployed and made accessible through an AWS API Gateway.

Code samples:

  1. https://github.com/Thoughtscript/aws_dop_c02/tree/main/domain_one_ecs
  2. https://github.com/Thoughtscript/aws_lambda_example_2024
  3. https://github.com/Thoughtscript/aws_dop_c02/blob/main/domain_one_cbl

AWS DOP-C02: Domain 2

Configuration Management and IaC

Code samples:

  1. https://github.com/Thoughtscript/aws_dop_c02/tree/main/domain_two

AWS DOP-C02: Domain 3

Resilient Cloud Solutions

AWS DOP-C02: Domain 4

Monitoring and Logging

Code samples:

  1. https://github.com/Thoughtscript/aws_dop_c02/tree/main/domain_four

AWS DOP-C02: Domain 5

Incident and Event Response

AWS DOP-C02: Domain 6

Security and Compliance

COMPTIA SY0-701: Overview

The COMPTIA Security+ SY0-701 exam divides into five general security topics:

  1. General Security Concepts
  2. Threats, Vulnerabilities, and Mitigations
  3. Security Architecture
  4. Security Operations
  5. Security Program Management and Oversight

Summarizing and clarifying certain topics. Most is stuff I already know.

  1. https://www.comptia.org/certifications/security#objectivesform
  2. https://www.comptia.org/faq/security/what-is-on-the-comptia-security-exam

COMPTIA SY0-701: General Security Concepts

Security Controls

Compare and contrast various types of security controls:

Security Concepts

Summarize fundamental security concepts:

Change Management

Explain the importance of change management processes and the impact to security:

Cryptographic Solutions

Explain the importance of using appropriate cryptographic solutions:

  1. https://csrc.nist.gov/glossary/term/security_controls
  2. https://csrc.nist.gov/glossary/term/operational_controls
  3. Committee on National Security Systems (CNSS 4009) Glossary
  4. https://konghq.com/learning-center/cloud-connectivity/control-plane-vs-data-plane

COMPTIA SY0-701: Threats, Vulnerabilities, and Mitigations

Threat Actors

Compare and contrast common threat actors and motivations:

Attack Surfaces

Explain common threat vectors and attack surfaces:

Vulnerabilities

Explain various types of vulnerabilities:

Indicators of Malicious Activity

Given a scenario, analyze indicators of malicious activity:

Mitigation Techniques

Explain the purpose of mitigation techniques used to secure the enterprise:

  1. https://bluecatnetworks.com/blog/four-major-dns-attack-types-and-how-to-mitigate-them/
  2. https://research.nccgroup.com/wp-content/uploads/2021/09/TOCTOU_whitepaper.pdf
  3. https://www.cloudflare.com/learning/ddos/dns-amplification-ddos-attack/

COMPTIA SY0-701: Security Architecture

Architecture Models

Compare and contrast security implications of different architecture models:

Security Principles

Given a scenario, apply security principles to secure enterprise infrastructure:

Data Protection Concepts

Compare and contrast concepts and strategies to protect data:

Resilience and Recovery

Explain the importance of resilience and recovery in security architecture:

COMPTIA SY0-701: Security Operations

Common Security Techniques

Given a scenario, apply common security techniques to computing resources:

Security Implications of Proper Asset Management

Explain the security implications of proper hardware, software, and data asset management:

Vulnerability Management

Explain various activities associated with vulnerability management:

Monitoring Concepts and Tools

Explain security alerting and monitoring concepts and tools:

Modify Security

Given a scenario, modify enterprise capabilities to enhance security:

Identity and Access Management

Given a scenario, implement and maintain identity and access management:

Automation and Orchestration

Explain the importance of automation and orchestration related to secure operations:

Incident Response

Explain appropriate incident response activities:

Use Data Sources

Given a scenario, use data sources to support an investigation:

  1. https://www.udemy.com/course/comptia-security-sy0-701-practice-exams-2nd-edition
  2. https://www.cloudflare.com/learning/email-security/dmarc-dkim-spf/
  3. https://www.courier.com/guides/dmarc-vs-spf-vs-dkim/
  4. https://www.strongdm.com/blog/saml-vs-oauth
  5. https://jwt.io/

COMPTIA SY0-701: Security Program Management and Oversight

Security Governance

Summarize elements of effective security governance:

Risk Management Process

Explain elements of the risk management process:

Third-Parties

Explain the processes associated with third-party risk assessment and management:

Effective Security Compliance

Summarize elements of effective security compliance:

Audits and Assessments

Explain types and purposes of audits and assessments:

Security Awareness Practices

Given a scenario, implement security awareness practices:

  1. https://www.druva.com/blog/understanding-rpo-and-rto
  2. https://corpslakes.erdc.dren.mil/partners/moumoa.cfm
  3. https://www.pandadoc.com/blog/master-services-agreement-vs-statement-of-work/

COMPTIA SY0-701: Miscellaneous Concepts

Windows Security

Security Systems

Kinds of Phishing

Regulatory Designations

Elliptic Curve Cryptography

RAID

RAID Configurations:

Bluetooth

Important Acronyms

Wireless Security Protocols

Key ISO Standards

  1. https://www.cisco.com/c/en/us/support/docs/security-vpn/remote-authentication-dial-user-service-radius/13838-10.html
  2. https://www.udemy.com/course/securityplus/learn/quiz/6090708#overview