arduino stm32 stm32f103 windows burn and flash bootloader

    • download Python for Windows (once serial support in WSL (bash) works, this should all work within bash), then pip install pyserial

    • launch PowerShell

    git clone

    grab the bootloader binary (assuming the LED is on PC13), e.g. via curl

    connect your FTDI adapter and move BOOT0 to 1

    Note that the -e option was needed: python.exe .\ -e -p COM4 -w .\generic_boot20_pc13.bin -v -V

    Press reset; the LED should flash fast, then slow, then go off.
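    The flashing step can be scripted; here is a minimal Python sketch that just assembles the command line shown above (the loader-script name is a placeholder, since the post truncates it; substitute your own):

    ```python
    # Sketch: build the stm32loader-style command line used above.
    # "" is a placeholder name, not from the post.
    def build_flash_cmd(script, port, image, erase=True, verify=True):
        cmd = ["python.exe", script]
        if erase:
            cmd.append("-e")              # mass-erase first (the -e noted above)
        cmd += ["-p", port, "-w", image]  # serial port and image to write
        if verify:
            cmd += ["-v", "-V"]           # verify after write, verbose
        return cmd

    print(" ".join(build_flash_cmd("", "COM4", "generic_boot20_pc13.bin")))
    ```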

    %{description: "get stm32 usb bootloader working", title: "arduino stm32 stm32f103 windows burn and flash bootloader"}


  • ellie decoder

    quick ellie setup for playing with json decoders


    %{description: "for decoder play", title: "ellie decoder"}


  • graph database research

    Web UI

    A few examples that are interesting for visualization

    a dagre graph layout with a mini-map and neighbor highlighting

    list of graph visualization libs

    popoto.js seemed very interesting. The filtering may be useful to model on; however, it seems very tied to neo4j, which has licensing issues.

    wine and cheese is a very good demo of what cytoscape.js is capable of. I plan to look at lunr.js and the text search provided in this example.


    Generally, while pacer is mostly a joy to use for manipulating graphs and prototyping things, it has not had any updates in about a year and I fear it may be dead.

    Gremlin looks OK on the surface, but in practice the groovy REPL seems extra painful and the syntax is a bit obtuse.

    TinkerGraph is fast when using indexes and pacer; however, it does not support transactions, which makes some multithreading tasks very hard. If you know the id, or can infer it, you can make pacer go quite a lot faster. Using pacer's bulk_job also helps a bit with thread safety.

    Cytoscape has a Java application and a JavaScript library. Both assume you know enough about your graph to filter it before you view it. I needed something that lets me explore a graph without loading the whole thing, so I decided to use the cytoscape.js library for this.
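    The "explore without loading the whole thing" idea boils down to expanding one node's neighborhood at a time and handing only the new elements to the renderer. A rough Python sketch of that access pattern (the graph and names here are invented for illustration; cytoscape.js would draw what this returns):

    ```python
    # Sketch: incremental neighborhood expansion over an adjacency map,
    # the access pattern behind exploring a large graph one hop at a time.
    def expand(adjacency, node, seen):
        """Return the newly visible neighbors of `node`, updating `seen`."""
        new_nodes = [n for n in adjacency.get(node, []) if n not in seen]
        seen.update(new_nodes)
        return new_nodes

    graph = {"a": ["b", "c"], "b": ["c", "d"], "c": [], "d": ["a"]}
    seen = {"a"}
    print(expand(graph, "a", seen))  # -> ['b', 'c']
    print(expand(graph, "b", seen))  # -> ['d']  (c was already seen)
    ```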

    linkurious.js has a very nice interface, but it is a “service” and they want money :(


    Cayley seems to have created lots of buzz, but it all seems very golang-specific. The server fell over quite quickly when I tried to use HTTPoison and Elixir-spawned processes to load data. It seems loading requires me to drop into Go, or to write out nquads.

    nquads are also hard to conceptualize alongside a property graph. There may be some great perf benefits, but the overhead of understanding them prevented me from digging in.

    The UI provided by Cayley is oddly d3 for the query shape and sigma for results. It has limited interactivity and control over style.

    I found this helpful: []

    [basic pacer cytoscape.js browser](

    Retiring a bunch of tabs regarding graph databases

    pacer table of what returns an enum

    pacer rubydocs for subgraph

    %{description: "cayley graph, pacer etc", title: "graph database research"}


  • elm examples

    encoding pairs (String,String)

    import Html exposing (text,div,br)
    import Json.Encode
    import Markdown

    type alias R = List (String,String)

    t = ("a","b")
    r = [t,t]

    enc = encPair t

    encPair : (String,String) -> Json.Encode.Value
    encPair pair =
        let
            (a,b) = pair
        in
            Json.Encode.object [(a, Json.Encode.string b)]

    encLst : R -> Json.Encode.Value
    encLst lst = (\x -> encPair x) lst |> Json.Encode.list

    jMark : String -> String
    jMark s =
      "\n\n```" ++ s ++ "```"

    main =
      div [] [
        text <| "Json Encoding"
        , Markdown.toHtml [] (jMark (Json.Encode.encode 2 enc))
        , br [] []
        , Markdown.toHtml [] (jMark (Json.Encode.encode 2 (encLst r)))
        ]


    import Html exposing (text, p)
    import Html.Attributes exposing (..)
    main =
      p [ contenteditable True ][ text "Hello, World! click to edit" ]

    mock child component using http example

    import Html exposing (..)
    import Html.App as App
    import Html.Attributes exposing (..)
    import Html.Events exposing (..)
    import Http
    import Json.Decode as Json
    import Task

    main =
      App.program
        { init = init "cats"
        , view = view
        , update = update
        , subscriptions = subscriptions
        }

    -- Parent Stuff
    -- MODEL
    type alias Model =
      { topic : String
      , gifUrl : String
      , child : ChildModel
      }

    init : String -> (Model, Cmd Msg)
    init topic =
      ( Model topic "waiting.gif" child_init
      , Cmd.batch [getRandomGif topic, ChildProxy childGet]
      )

    -- UPDATE
    type Msg
      = MorePlease
      | FetchSucceed String
      | FetchFail Http.Error
      | ChildGet
      | ChildProxy ChildMsg

    update : Msg -> Model -> (Model, Cmd Msg)
    update msg model =
      case msg of
        MorePlease ->
          (model, getRandomGif model.topic)
        FetchSucceed newUrl ->
          ({model | gifUrl = newUrl}, Cmd.none)
        FetchFail _ ->
          (model, Cmd.none)
        ChildGet ->
          let
            cmd = childGet
            mapped_cmd = (\x -> ChildProxy x) cmd
          in
            (model, mapped_cmd)
        ChildProxy proxy_msg ->
          let
            (child_model, child_msg) = child_update proxy_msg model.child
          in
            ({model | child = child_model}, Cmd.none)

    -- VIEW
    view : Model -> Html Msg
    view model =
      let
        child_stuff = (\x -> ChildProxy x) (child_view model.child)
      in
        div []
          [ h2 [] [text model.topic]
          , button [ onClick MorePlease ] [ text "More Cats!" ]
          , button [ onClick ChildGet ] [ text "More Coffee!" ]
          , br [] []
          , img [src model.gifUrl] []
          , child_stuff
          ]

    subscriptions : Model -> Sub Msg
    subscriptions model =
      Sub.none

    -- HTTP
    getRandomGif : String -> Cmd Msg
    getRandomGif topic =
      let
        url =
          "" ++ topic
      in
        Task.perform FetchFail FetchSucceed (Http.get decodeGifUrl url)

    -- Child Stuff
    type alias ChildModel = {txt : String, gifUrl : String}

    child_init : ChildModel
    child_init = {txt = "init", gifUrl = "waiting.gif"}

    type ChildMsg
      = Go
      | Win String
      | Fail Http.Error

    child_update : ChildMsg -> ChildModel -> (ChildModel, Cmd ChildMsg)
    child_update msg model =
      case msg of
        Go ->
          --cmd = Task...
          --(model, cmd)
          (model, Cmd.none)
        Win newUrl ->
          ({model | txt = "won", gifUrl = newUrl}, Cmd.none)
        Fail _ ->
          ({model | txt = "did not win"}, Cmd.none)

    child_view : ChildModel -> Html ChildMsg
    child_view model =
      div [] [
        text <| "Child: " ++ model.txt
        , hr [] []
        , text <| "Url: " ++ model.gifUrl
        , br [] []
        , img [src model.gifUrl] []
        ]

    childGet : Cmd ChildMsg
    childGet =
      Task.perform Fail Win (Http.get decodeGifUrl "")

    -- Shared Stuff
    decodeGifUrl : Json.Decoder String
    decodeGifUrl = ["data", "image_url"] Json.string

    json decoding

    import Html exposing (text,div,hr)
    import Json.Decode

    --|  this is our test json string.  it uses """ syntax for multi line strings
    jsonString = """
      [{ "key" : "bar" }]
    """

    --| This is for our json object type alias
    type alias LstItem = {key: String}

    --| This alias defines a simple list of json objects [{"key":"value"}]
    type alias Lst = List LstItem

    --| this defines a decoder for the Json objects in our list.
    decoder = Json.Decode.object1 LstItem ( ["key"] Json.Decode.string)

    --| This defines a decoder function to process a list
    decodeLst : Json.Decode.Decoder Lst
    decodeLst = Json.Decode.list decoder

    --|  this function actually runs the decoder and produces a Result
    doDecodeLst : Json.Decode.Decoder a -> String -> Result String a
    doDecodeLst decodeLst raw_string =
      Json.Decode.decodeString decodeLst raw_string

    --|  this one runs the decoder and processes the Result into an empty list or
    --|  the decoded list and disregards any errors
    maybeDecodeLst : Json.Decode.Decoder (List LstItem) -> String -> List LstItem
    maybeDecodeLst decodeLst raw_string =
      case Json.Decode.decodeString decodeLst raw_string of
        Err str -> []
        Ok lst -> lst

    --| the main program, in this case a view.
    main =
      div [] [
        text "decoding json"
        , text "the first version creates a result: "
        , text <| toString (doDecodeLst decodeLst jsonString)
        , hr [] []
        , text "the second version creates a record: "
        , text <| toString (maybeDecodeLst decodeLst jsonString)
        , hr [] []
        , text "the third version creates an error by using the wrong decoder: "
        , text <| toString (doDecodeLst decoder jsonString)
        ]

    Frustrated with the difficulty of updating nested data, I was testing a few ways to provide something similar to Elixir's put_in function.
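    For reference, Elixir's put_in replaces the value at a nested path and returns an updated copy. A minimal Python sketch of that behavior (dicts stand in for Elm records here; this is an illustration, not the post's code):

    ```python
    def put_in(data, path, value):
        """Return a copy of nested dict `data` with `value` placed at `path`."""
        if not path:
            return value
        key, rest = path[0], path[1:]
        updated = dict(data)  # shallow copy at each level, like an immutable update
        updated[key] = put_in(data.get(key, {}), rest, value)
        return updated

    nested = {"parent": {"child1": {"child2": "old"}}}
    print(put_in(nested, ["parent", "child1", "child2"], "new"))
    ```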

    - recursion and unnesting
    import Html exposing (text,div,hr)
    import Dict
    import String

    main =
      let
        unnested = unnest p Dict.empty ""
        path = [":parent","child1"]
        pathString = String.join ":" path
        found = Dict.get pathString unnested
        -- the original update branch was lost; a placeholder stands in here
        updated = if found == Nothing then "no match" else "found " ++ pathString
      in
        div [] [
          text "Recurse! ->"
          , text (toString p)
          , hr [] []
          , text "Unnest! ->"
          , text (toString unnested)
          , hr [] []
          , text <| " path string: " ++ pathString ++ " ... "
          , text (toString found)
          , hr [] []
          , text <| " updated: " ++ (toString updated)
          ]

    --unnest : Response -> String
    unnest : Response -> Dict.Dict String Response -> String -> Dict.Dict String Response
    unnest comment dict path =
      case comment of
        NoChild rec -> dict
        ParentResponse rec ->
          let
            newPath = path ++ ":" ++  -- key each node by its name
            newDict = Dict.insert newPath comment dict
            com = rec.response
          in
            unnest com newDict newPath

    --nest : Dict.Dict String Response -> Response
    --nest dict =   (stub, never implemented)

    type Response
      = ParentResponse
        { name : String
        , b : Bool
        , s : String
        , response : Response
        }
      | NoChild
        { name : String
        , b : Bool
        , s : String
        , response : Bool
        }

    p : Response
    p =
      ParentResponse
        { name = "parent"
        , b = False
        , s = ""
        , response = c1
        }

    c1 : Response
    c1 =
      ParentResponse
        { name = "child1"
        , b = False
        , s = ""
        , response = c2
        }

    c2 : Response
    c2 =
      ParentResponse
        { name = "child2"
        , b = False
        , s = ""
        , response = c3
        }

    c3 : Response
    c3 =
      ParentResponse
        { name = "child3"
        , b = False
        , s = ""
        , response = c4
        }

    c4 : Response
    c4 =
      NoChild
        { name = "child"
        , b = False
        , s = ""
        , response = True
        }


    import Html exposing (text,div,hr)
    import Dict

    main =
      div [] [
        text ("Hello, World!" ++ (toString record) ++ (toString dict))
        , hr [] []
        , text (toString (recIt record))
        , hr [] []
        , text "Tuples: "
        , text (toString tpls)
        , hr [] []
        , text "put_in: "
        --, text (toString (put_in tpls))
        , text (toString (put_in list3d ["name", "name2"]))
        , text "todo: you were going to try to pattern match out a match on your update of the nested path"
        ]

    record =
      { foo = ("String","foo")
      , bar = {t = "Int", v = "1"}
      }

    record2 =
      { nofoo = {t = "Bool", v = "True"}
      }

    -- name, type, value
    tpls =
      [ ( "foo", "Bool", "True")
      --, ( "bar", "List",
      --    ("baz", "Bool", "False")
      --  )
      ]

    niltpl = ("foo", "String", ("","","string var"))
    nesttpl = ("bar", "Nest", ("baz", "String", " nested var"))

    moretpls =
      [ niltpl
      , nesttpl
      ]

    type alias Item = (String, String, String)
    type alias ItemList = (String, String, Item)
    --type alias SuperItem = (String, String, MaybeString)
    --type Thing = Item | ItemList
    --type MaybeString = String | Item  -- clashes with the aliases above

    type alias ThreeDeep =
      ( String, String,
          ( String, String,
              ( String, String, String)
          )
      )

    threedeep =
       ("name", "type",
         ("name2", "type",
           ("", "value depth2", "")
         )
       )

    threedeep2 =
       ("name", "type",
         ("name2", "type",
           ("name3", "type", "value depth 3")
         )
       )

    threedeep3 =
       ("name", "type",
         ("", "value depth 1",
           ("", "", "")
         )
       )

    list3d = [ threedeep, threedeep2, threedeep3 ]

    put_in : List ThreeDeep -> List String -> List String
    put_in lst path =
      let
        -- map list to find first path map
        x = 1
      in
        -- return something (stub: just echoes the path for now)
        path

    --put_in : List SuperItem -> List String -> List String
    --put_in tpl names =
    --recFun :

    dict = Dict.fromList
      [ ("key", { name = "foo"})
      , ("key2", { name = "1"})
      , ("key", { name = "same key"})
      ]

    type Splay = A | B

    splay : Splay -> String
    splay input =
      case input of
        A -> "out"
        B -> "out2"

    recIt record =
      case record of
        {foo} -> {hasFoo = True}
        --{foo, bar} -> {hasFoo = False}

    %{description: "a few examples from learning elm", title: "elm examples"}


  • AWS CLI whoami or how to troubleshoot aws cli permission issues

    please ignore my code rage’ed profile name

    [elixir1.2@stink rel]$ aws iam get-user --profile fuckyou
    A client error (AccessDenied) occurred when calling the GetUser operation: User: arn:aws:iam::[REDACTED]:user/[REDACTED] is not authorized to perform: iam:GetUser on resource: arn:aws:iam::[REDACTED]:user/[REDACTED]

    I recently made the terrible assumption that source ~/.bashrc would actually clean out old env variables. This led to 20 minutes of going down the wrong path trying to troubleshoot why my aws cli was getting a permission denied. Lots of time could have been saved via the magical whoami command above.

    The above command essentially uses the calling credentials to fetch the caller's ARN and tries to get info about it; even the AccessDenied error handily tells you which user you are.

    %{description: "aws cli woes", title: "AWS CLI whoami or how to troubleshoot aws cli permission issues"}


  • updated reflux react and phoenix

    I found this post very helpful in getting phoenix 1.2 working with react:

    I was unable to get bower working, and followed the above blog's instructions with some success. I hope to get time to do an update.

    I added the following to get reflux working. I have to include everything everywhere; not sure why.

    npm install --save reflux bootstrap

    %{description: "update on getting react working", title: "updated reflux react and phoenix"}


  • asmedia 1061 port multiplication fis controller hack

    I bought this esata device: Mediasonic 4 Bay Dock

    I was disappointed to find my machine completely puked when I hooked it up. After hours of research I discovered a bunch of posts saying it was not possible, so I gave up. While searching for a card that would provide this function I found this card, which seemed cheap and ready to work.

    Mediasonic ProBox HP1-SS3 2 Port External SATA 3 / III 6.0 Gbps PCI Express Card - Port Multiplier / FIS-Based switch

    Reading the reviews I saw a trend of people annoyed about having to go off and download the driver. The driver on that page was called asmedia106x… so I figured if it worked for their card, I might as well try it, since I had an ASMedia 1061 controller on my motherboard. After a reboot or two it was working like a charm!

    Moral of the story: esata connectivity sucks for older controllers, but it apparently is all in the driver.

    %{description: "getting port multiplication working on the H77 Pro4-M esata port", title: "asmedia 1061 port multiplication fis controller hack"}


  • Wiring reflux and react to phoenix websockets

    Phoenix Sockets and react.js components via reflux.js

    In this post I will describe how to wire up some server state to our react components


    This quick tutorial has only been tested with the following component versions. Beware trying to get this working with other versions; Phoenix is not yet 1.0 and may change enough to make this no longer work.

    Git Repo: []

    Elixir 1.0.5 and Phoenix 0.16.1 components:

    Bower components for reflux 0.2.7, react 0.13.3


    The first post described how to use react.js and reflux to coordinate react components. This post will describe how to wire in server state through phoenix sockets.

    The great thing about this approach is that you largely remove the drudgery of creating a REST CRUD api, and just send json down to your javascript as events. The dataflow is in one direction, which simplifies our design.

    To demonstrate this we will be adding a few key components. The first will be an Elixir Agent to hold our state. The second will be to integrate our Reflux store with Phoenix sockets. We will track clicks to our buttons, and who’s using the system.

    The reflow diagram shows the basic flow of data.

    Let's look at how to hold our server state.

    An Elixir Agent is essentially a server process that holds state in memory. It can leverage supervision, and be run as an application. The Elixir docs for Agent can be found here. Our agent will provide functions to:

    • Register a user login
    • Register a user logout
    • Register a user click
    • Retrieve the current user_count
    • Retrieve the current state
    • Start and Stop

    I also stubbed out broadcasting and messages, which we will ignore for now.
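    As a language-neutral sketch of those responsibilities, here is a rough Python analogue of the state holder (invented names; the Agent serializes access to one state map, which a lock approximates here):

    ```python
    import threading

    class LogAgentAnalogue:
        """Rough analogue of the Elixir Agent: serialized access to one state map."""
        def __init__(self):
            self._lock = threading.Lock()
            self._state = {"users": [], "user_count": 0, "msgs": [], "hits": 0}

        def login(self, user):
            with self._lock:
                self._state["user_count"] += 1
                # prepend, like [user|users] in Elixir
                self._state["users"] = [user] + self._state["users"]

        def logout(self, user):
            with self._lock:
                self._state["user_count"] -= 1
                self._state["users"].remove(user)

        def hit(self):
            with self._lock:
                self._state["hits"] += 1

        def get(self):
            with self._lock:
                return dict(self._state)
    ```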


    setup state: def start_link

    Our start_link function initializes our state. In this case it is simply a map with default attributes: an empty list for users, a zero user count, an empty list for messages, and a zero count for hits.

    We simply pass a function which returns the map, and a name, which gets compiled to the module name Elixir.LogAgent or in Elixir just LogAgent.

    def start_link do
        map = %{users: [], user_count: 0, msgs: [], hits: 0}
        Agent.start_link(fn -> map end, name: __MODULE__)
    end

    get state: def get

    get_user_count simply calls Agent.get with our module name, using the function-shorthand (partial application) syntax to return a piece of the state.

    def get_user_count do
        # &(&1.user_count) is shorthand for fn state -> state.user_count end
        Agent.get(__MODULE__, &(&1.user_count))
    end

    We could also write this more verbosely:

    def get do
        Agent.get(__MODULE__, fn(state) ->
          # last arg is returned
          state
        end)
    end

    register a login: def login(user)

    This grabs a user and puts it into our state map. One flaw to be fixed here is that we don't check whether the user is already logged in.

    If you are unfamiliar with Elixir or Erlang, the syntax for adding a user to our list may be confusing. This is called a "cons cell", and it allows you to reference a list as a head and a tail. When used on the left side of "=", it binds the first element of the list to the variable on the left of the "|", and the rest of the list to the variable on the right.

    [head|tail] = [1,2,3,4,5]

    head is now 1, and tail is now [2,3,4,5]. This is because "=" is not an assignment operator as in most languages, but a pattern match.

    When used on the right side of "=", or bare, it prepends head to your list.

    # bare
    iex(7)> element = 1
    iex(8)> list = [2,3,4]
    [2, 3, 4]
    iex(9)> [element|list]
    [1, 2, 3, 4]
    # right side of "="
    iex(10)> list = [1|[1,2,3]]
    [1, 1, 2, 3]

    Back to our Agent…

    def login(user) do
        # get the current state
        s = get()
        # increment our user counter
        new_state = Map.put(s, :user_count, s.user_count + 1)
        IO.puts inspect new_state
        # add our user to our users list
        new_state = Map.put(new_state, :users, [user|new_state.users])
        IO.puts inspect new_state
        # store the update
        Agent.update(__MODULE__, fn state -> new_state end)
        # stub to broadcast a change event
    end

    The rest of the agent is pretty straightforward if you understand the partial-application syntax.

    Setup phoenix channels and sockets

    Step one here is to look at our endpoint, and ensure we have our sockets mapped correctly.


    defmodule RefluxEventbrokerReactPhoenixElixir.Endpoint do
      use Phoenix.Endpoint, otp_app: :reflux_eventbroker_react_phoenix_elixir
      # commenting this out caused me all kinds of problems.  Seems to be some leftover assumptions this exists.
      socket "/socket", RefluxEventbrokerReactPhoenixElixir.UserSocket
      # this plumbs our socket path to our Socket functions in web/channels/pub_chat_socket.ex
      socket "/status",Reflux.PubChatSocket
    # SNIP


    Phoenix web sockets break things into sockets and channels. Sockets allow you to manage connections and authenticate a particular websocket path. They also allow you to manage the transport.

    defmodule Reflux.PubChatSocket do
      require Logger
      use Phoenix.Socket

      # Defines our channel name, and what Elixir module will be used to control it, PubChannel in this case
      channel "all", PubChannel

      # Defines the transport, and if we need to check the host origin.  Check origin is useful if you want to limit access to your sockets to certain hosts
      transport :websocket, Phoenix.Transports.WebSocket, check_origin: false

      # connect parses our connection parameters from our client.  using phoenix.js this is socket.connect(params);
      # we also use Phoenix.Socket.assign/3 to embed our user and pass into the socket struct, which gets passed along to our channel.
      def connect(params, socket) do"PARAMS: \n " <> inspect params)
        socket = assign(socket, :user, params["user"])
        socket = assign(socket, :pass, params["pass"])
        {:ok, socket}
      end

      # id allows us to broadcast to all users with a particular id.  I'm not using this in this revision.
      def id(socket) do"id called " <> inspect(socket, pretty: true))
        nil
      end
    end

    So now we have our channel “all” mapped to our channel logic.


    • join/3 : manages client join requests
    • handle_info/2 : manages our state update broadcasts
    • handle_in/3 : manages any messages sent to the channel after join has completed successfully
    • terminate/2 : manages when a websocket connection is no longer active

    Channels use the behaviour pattern. Behaviours give us structure and composition. They are most heavily used in OTP patterns like GenServer. Behaviours generally lean heavily on pattern matching in function definitions, which is worthy of discussion for folks new to Elixir.

    Take the following definitions:

    defmodule Foo do
        def bar(:atom) do
            "got an atom"
        end
        def bar({a,b}) do
            "got a 2 tuple with variables a and b assigned the arg's tuple values"
        end
        def bar(%{foo: foo} = arg) do
            "got a map with a key of :foo, interpolated into the variable 'foo', and the full map assigned to 'arg'"
        end
        def bar(%{"foo" => foo} = arg) do
            "foo key was a binary"
        end
        def bar(any) do
            "got anything else"
        end
    end

    Elixir will take any call to and try to see if the argument fits a definition. This works top to bottom. The last case, #5, will match any call to Having a catch-all can be useful in debugging, to detect and crash when you have unexpected input. Example #1 will only match the atom :atom. Example #2 will only match a 2-element tuple. Example #3 is much more interesting and powerful.

    Elixir map pattern matching allows you to look inside the argument and use different function definitions based on the keys of the map. In this case we will only match #3 if the argument is a map with a key of :foo. If we want access to the rest of the map we can use the arg variable. Any map containing the key :foo will match: %{foo: 1, bar: 2} matches #3, but %{"foo" => 1} will match #4 because the key is a binary (string). When you are serializing data to javascript it is best to use binaries as keys. Binaries also have very powerful pattern-matching capabilities you may want to explore.
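    Python 3.10's match statement gives a rough analogue of this head-by-head dispatch (an illustration only, not Elixir semantics; the tuple stands in for the atom clause):

    ```python
    # Rough Python analogue of matching function heads top to bottom.
    def bar(arg):
        match arg:
            case ("atom",):            # stand-in for the :atom clause
                return "got an atom"
            case (a, b):               # any 2-element sequence
                return f"got a 2 tuple: {a}, {b}"
            case {"foo": foo}:         # like %{foo: foo}; extra keys still match
                return f"got a map with foo = {foo}"
            case _:                    # catch-all, like def bar(any)
                return "got something else"

    print(bar({"foo": 1, "bar": 2}))  # -> got a map with foo = 1
    ```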

    For phoenix channels we need join/3 and handle_in/3 at a minimum.

    def join("all", payload, socket) do
        #  socket.assigns.user is assigned in our Socket
        user = socket.assigns.user
        # register the login event with our Agent
        LogAgent.login(user)"User #{user} logged in: payload: #{inspect payload}")
        # we can't broadcast from here so we call to handle_info
        send self, :status_update
        # return ok, and a "welcome" message to the client joining
        {:ok, %{msg: "welcome"}, socket}
    end
    In this commit I have a defunct catch-all def join; below I've fixed it to catch any joins with the wrong channel name. We could provide additional authentication checks in our first def join, and catch issues here.

    def join(any, s, socket) do
        Logger.error("unknown channel: #{inspect any} for assigns #{inspect socket.assigns}")
        {:error, %{reason: "unauthorized"}}
    end

    Next is handle_info, which broadcasts to all clients who have joined our "all" channel.

    def handle_info(:status_update, socket) do"handle_info :status_update")
        # broadcast!/3 sends an event "status_users" with the current state from our LogAgent
        # it wouldn't be a bad idea to throttle this for a large number of clients
        broadcast! socket, "status_users", LogAgent.get
        # we don't need a reply since we just used broadcast
        {:noreply, socket}
    end

    I have added a few events in a number of handle_in/3 definitions: :status_update, "status_users", "ping", "hit", and any_event. They all work pretty much the same; any_event is a catch-all for errors. "hit" does the most work for our use case. Notable here is the use of send, which is generically the way Elixir processes communicate with each other. In this case we use self(), which returns the current PID, and the message matches def handle_info(:status_update, socket). You can read more about send here.

    def handle_in("hit", p, socket) do"Hit from #{socket.assigns.user}")
        # update our state
        # call the broadcast for all connected users
        send self, :status_update
        {:noreply, socket}
    end

    Finally, for our Channel we need to handle clients leaving. We define terminate/2 to update our state and user count.

    def terminate(reason, socket) do
        # this test for assigns.user should never happen if our socket is doing its job
        if socket.assigns.user != nil, do: LogAgent.logout(socket.assigns.user)"terminated: #{inspect socket.assigns}")
        # I added this because I had some client terminations not notify; need to dig into why.  The messaging should
        # be asynchronous, so there is a chance the state is not updated when we call :status_update
        # broadcast to all connected clients
        send self, :status_update
    end

    reflux phoenix websocket client

    Now that we have our server all wired up to talk to clients, we can dig into the client code. Reflux will manage all data from the server, and the react components will send their updates to the server, which propagate back down to reflux to update our state.

    First we add a new action called “hit”


    export default Reflux.createActions([

    Next we update our reflux store to connect to phoenix


    import Actions from "../Actions";
    export default Reflux.createStore({
      // binds our onSwap and onHit functions
      listenables: Actions,
      init() {
        this.test = true;
        // no logging
        //this.socket = new Phoenix.Socket("/status")
        // This creates our socket and sets up logging as an option
        this.socket = new Phoenix.Socket("/status",{logger: (kind, msg, data) => { console.log(`${kind}: ${msg}`, data) }})
        // lazily create a semi unique username
        var r = Math.floor((Math.random() * 1000) + 1); = "me"+r
        // these are our auth params which get sent to both connect/2 in our phoenix socket and join/3 in our phoenix channel
        this.auth = {user:,pass: "the magic word"}
        // this maps our params to our socket object
        // callbacks for various socket events
        // configure our channel for "all"
        this.user_chan ="all")
        console.log("chan", this.user_chan)
        // bind a function to any message with an event called "status_users"
        this.user_chan.on("status_users",data => {
          console.log("chan on hook",data);
          // blindly push data from server into our state
        // this is what actually joins the "all" channel.  When the server responds "ok" and the join is successful we can 
        // drive other events, we just log it here.
        this.user_chan.join(this.auth).receive("ok", chan => {
         // callback for any errors caused by our join request
         .receive("error", chan => {
      // pass our init() to React's state
        return this;
        console.log("onOpen",thing, this)
        // trigger is what will push our new state to React
      // This is bound by our Actions.js.  it pushes a message to handle_in("hit","hit",socket) which increments a hit counter
      // this is triggered in our onClick handler for BtnA and BtnB
      // our old swap action
        console.log("switch triggered in: ",x)
        console.log("TheStore test is",this.test)
        this.trigger({test: !x})

    We add a new component to handle our user status data


    import TheStore from "../stores/TheStore"
    export default React.createClass({
      // wire in our reflux store
      mixins: [Reflux.connect(TheStore)],
        // initial values in case the server is not connecting
            return({user_count: 0, hits: 0, users: []} )
        render: function() {
            var doItem = function(item){
              return (<span> name: {item} </span>)
            return (
                <div className="panel panel-default">
                    <div className="panel-heading">
                        Status: me: {} -- hits: <span className="badge">{this.state.hits}</span> 
                    <div className="panel-body">
                        Current Users: {} <span className="badge">{this.state.user_count}</span> 
                        Hits: <span className="badge">{this.state.hits}</span>

    Finally we can update our BtnA and BtnB components. They are very much the same, so I’ll only walk through one.

    import Actions from "../Actions"
    import TheStore from "../stores/TheStore"
    export default React.createClass({
        mixins: [Reflux.connect(TheStore)],
            return {"name":"BtnA"};
          // This triggers our onHit function in TheStore.js which pushes our event up to phoenix
            return (
                <button className="btn btn-danger" onClick={this.handleClick}> 
                    This is {}: val: {this.state.test.toString()} 

    That should be it! A working example can be found at

    %{description: "Using reflux.js react.js phoenix and elixir to connect components. This also describes the use of Elixir Agents, and Elixir's Behaviours.", title: "Wiring reflux and react to phoenix websockets"}


  • react.js contentEditable from object

    This fiddle demonstrates a react.js component that takes an object as an argument and creates a set of contentEditable spans for the values, labeled with the keys.

    A bit of quick annotation below. Essentially I use 2 props, name and item, for each key/value pair. Only the value is editable.

    // this is our object, please excuse the vulgarity.  I was tired when I wrote this.
    var node1 = {
      owner: "i am the owner",
      where: "right here bitch",
      name: "Joe Super Fucking AWESOME"
    }
    // this is our main component which manages the changes to the content in our contentEditable spans
    var ContentEditable = React.createClass({
        render: function(){
            return( <span id="contenteditable"
                contentEditable
                onInput={this.emitChange}
                dangerouslySetInnerHTML={{__html: this.props.item}} >
                </span> )
        },
        emitChange: function(){
            var html = this.getDOMNode().innerHTML;
            // only fire if the html changes
            if (this.props.onChange && html !== this.lastHtml) {
                // emit both the value of the change, and the key, as name here
                this.props.onChange({
                    target: {
                        value: html,
                        name: this.props.name
                    }
                });
            }
            this.lastHtml = html;
        }
    })
    var Item = React.createClass({
        getInitialState: function(){
            return {result: ""}
        },
        // this is where we'd do something with the update
        // mostly debugging output to see how things react to changes
        handleChange: function(e){
            var o = {}
            o[e.target.name] = e.target.value;
            this.setState(o);
        },
        handleClick: function(){
            console.log("pretend send:", this.state);
        },
        render: function(){
            var items = Object.keys(this.props.i).map(function(key){
                return (<span key={key}> {key}: <ContentEditable name={key} item={this.props.i[key]} onChange={this.handleChange} /></span>)
            }, this)
            return (<div>
                {items}
                <button onClick={this.handleClick} >
                    Pretend to send to a server or something
                </button>
                <div>result<pre>{JSON.stringify(this.state,null, 2)}</pre></div>
            </div>)
        }
    })
    var MyDiv = React.createClass({
        //mixins: [Reflux.connect(TheStore)],
        render: function(){
            return (
                <div> This holds our item! <br />
                    <Item i={node1} />
                </div>
            )
        }
    })
    React.render(<MyDiv />, document.getElementById('container'));
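The `{target: {name, value}}` event shape can be exercised without a DOM. Here is a plain-JS sketch of the dynamic-key merge that handleChange relies on (the names and values are invented for illustration):

```javascript
// Simulated synthetic event, same {target: {name, value}} shape the
// ContentEditable component emits
var e = { target: { name: "owner", value: "new owner text" } };

var state = { result: "" };

// dynamic key assignment: the event's name becomes the state key
var o = {};
o[e.target.name] = e.target.value;
Object.assign(state, o);

console.log(state); // { result: '', owner: 'new owner text' }
```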

    %{description: "object driven contentEditable pseudo form", title: "react.js contentEditable from object"}


  • aws eip eni elixir phoenix multiple network interfaces

    How to configure multiple interfaces on AWS ec2

    Quick notes on configuring multiple phoenix listeners on separate network interfaces

    AWS has a great feature that allows you to bind multiple virtual network interfaces (ENIs) to a single instance. This allows you to do several things, like migrate an IP between two hosts. I wanted to be able to run a dev server on another IP without having to fire up another instance.

    The ENI guide is a good place to start.

    Head over to the AWS console and create a new ENI, ensuring it is set up in the AZ you plan to use. Once it is created you can bind it to an instance by right-clicking on the new ENI and assigning it to the instance.

    If you need to map DNS, next create a new EIP (Elastic IP), right-click to associate it with your instance, and select the ENI you just created.

    EIPs are free as long as they are in use, so ensure you don’t leave them sitting around unbound.

    Now you should have 2 interfaces, which you can check like this:

    $ ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 06:F0:45:89:00:01
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::4f0:45ff:fe89:1/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:11229516 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10642366 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1340181266 (1.2 GiB)  TX bytes:4072465916 (3.7 GiB)
    eth1      Link encap:Ethernet  HWaddr 06:D0:99:9B:B1:85
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::4d0:99ff:fe9b:b185/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:1697 errors:0 dropped:0 overruns:0 frame:0
              TX packets:519 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:109796 (107.2 KiB)  TX bytes:1332985 (1.2 MiB)

    Assuming you are using a VPC, be sure you don’t use your public IPs or EIP addresses in your config. AWS uses 1:1 NAT in a VPC, so your instance knows nothing about its public addresses.

    Next, take a look at your config/#{Mix.env}.exs. Below is my dev version (config/dev.exs); note that the ip option is a 4-tuple (comma-separated), not a dotted-quad string. This ip option is what gets passed to ranch and will bind your IP address. Ensure you change it for each env you need to adjust.

    config :reflux_eventbroker_react_phoenix_elixir, RefluxEventbrokerReactPhoenixElixir.Endpoint,
      http: [ip: {10,1,1,221},port: 8080],
      debug_errors: true,
      code_reloader: true,
      cache_static_lookup: false,
      watchers: [node: ["node_modules/brunch/bin/brunch", "watch"]]

    Fire up mix phoenix.server to ensure it works.

    When you are all done and everything is working you can run

    [ec2-user@ip-10-1-0-34 ~]$ sudo service iptables save
    iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]

    %{description: "Quick notes on configuring multiple phoenix listeners on separate network interfaces", title: "aws eip eni elixir phoenix multiple network interfaces"}


  • how to log all phoenix.js websocket messages

    Quick phoenix.js websocket tip, log all messages

    This works for phoenix 0.16.1, not sure if it will continue to function this way.

    Just add this as a 2nd arg to your new Socket:

    var socket = new Phoenix.Socket("/status",{logger: (kind, msg, data) => { console.log(`${kind}: ${msg}`, data) }})

    This helps if your chan.on("foo", ...) is not working and you want to see what is happening under the covers.
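The logger option is just a `(kind, msg, data)` callback, so it can be run standalone to see the output shape. The payload below is invented purely for illustration:

```javascript
// format the kind/msg pair the same way the logger option above does
var format = (kind, msg) => `${kind}: ${msg}`;
var logger = (kind, msg, data) => { console.log(format(kind, msg), data); };

// invented example payload
logger("receive", "ok status join", { user_count: 2 });
```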

    more detail in the phoenix.js source code

    %{description: "logging all websocket messages to console", title: "how to log all phoenix.js websocket messages"}


  • word to markdown converter

    This demonstrates a very cool conversion of a Word document to markdown. The best part of poking at this for me was learning that markdown supports embedded images. This makes it easy to take a screenshot, paste it into Word, and then use this to convert it to markdown, pasted image included. It doesn’t account for everything; things like Word’s autoformatting of “2nd” can throw it for a loop. Being a terrible speller who hasn’t taken the time to figure out spellcheck with vim, I find this a good tool.
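The embedded images work through markdown’s ordinary image syntax, which also accepts a data URI, so a pasted screenshot can travel inside the .md file itself. A rough illustration (the base64 payload here is truncated and invented):

```markdown
![pasted screenshot](data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...)
```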

    There is a heroku app here that you can use to play with it.

    %{description: "convert word to markdown", title: "word to markdown converter"}


  • Using Reflux to broker events with React and Phoenix


    Draft 8-10-2015


    This quick tutorial has only been tested with the following component versions. Beware trying to get this working with other versions. Phoenix is not yet 1.0 and it may change enough to make this no longer work.

    Git Repo:

    Elixir 1.0.5 and Phoenix 0.15.0 components:

    Bower components for reflux 0.2.7, react 0.13.3


    I’ve been working with phoenix quite a bit lately and thought to explain the asset pipeline for javascript build tool neophytes like myself.

    React shares state nicely until you start to break things up and isolate components (a good practice). There is a great blog post discussing how to get your components talking, but if you are isolating components it doesn’t work. Reflux provides a nice model which can help you decompose your app and manage events.

    Firstly I just want to say .js is a ghetto, and I look forward to a day where browsers can support more languages than just .js

    Phoenix has a very nicely automated setup for dealing with this mess. It uses brunch and inotify to detect and dynamically compile your assets. The pipeline looks something like this:

    You may want to look at my ec2 setup here if you don’t already have erlang, elixir, brunch, phoenix all installed.

    Also for reference is my repo for this tutorial.

    The first commit adds the files we need and creates a bower.json

    Then run bower install

    The next commit adds the changes we need for our brunch config to include the new bower_components

    If you run mix phoenix.server you should see the default phoenix app page, and notice that brunch is picking through your static files and compiling them for you. If you open your dev tools in your browser you should see the following.

    Our first step is to create a few conventional directories to store our reflux stores and our react components.

    Now we need to make a few react components to play with. You may want to use the react fiddle here to familiarize yourself with react. You can see the fiddle I used here.

    To componentize this we will break it into 3 React components and 2 Reflux objects.

    Here is the commit which I will break down below.

    I’m not sure if this is needed, but this removes your bower_components from your babel conversions.

    -      ignore: [/^(web\/static\/vendor)/]
    +      ignore: [/^(web\/static\/vendor)|(bower_components)/]
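A quick check of what that ignore pattern actually matches (paths here are illustrative):

```javascript
// Note the second alternative is unanchored, so "bower_components"
// is ignored wherever it appears in a path, not only at the start.
var ignore = /^(web\/static\/vendor)|(bower_components)/;

console.log(ignore.test("web/static/vendor/foo.js"));        // true
console.log(ignore.test("bower_components/react/react.js")); // true
console.log(ignore.test("web/static/js/app.js"));            // false
```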

    In web/static/js/Actions.js we just set up an action for Reflux to listen to.

    +export default Reflux.createActions([
    +  "swap"
    +])


    I removed the standard template and added a target div for React to bind to:

    <div id="mydiv">
      our crap should go here
    </div>

    Finally comes all our js code. We have the following components

    TODO: add graphviz

    app.js -> Actions.js

    -> stores/TheStore.js
       -> MyDiv.js
          -> components/BtnA.js
          -> components/BtnB.js

    Bower and brunch do all of the magic to include our js components into our app.js aggregated file. We only have to import the components we created above. app.js initializes React to target the div “mydiv” in our index.html.eex template. It also pulls in our code from the import MyDiv statement.


    // this is for phoenix and its live code reloading / web socket connections
    import {Socket} from "phoenix"
    // pulls MyDiv.js into scope
    import MyDiv from "./MyDiv";
    React.render(
      <MyDiv />,
      document.getElementById("mydiv")
    );

    BtnA.js and BtnB.js are almost identical. They simply render a div with a button and a button label. They import our Actions and bind them to an event handler which calls Action.swap.


    +import Actions from "../Actions"
    // TheStore.js is our Reflux store
    +import TheStore from "../stores/TheStore"
    // create our react object, this exports the filename as the export name, in this case BtnA
    +export default React.createClass({
        // this mixin ties the state and events to our react component
    +    mixins: [Reflux.connect(TheStore)],
        // we simply initialize with the name of the component to be
        // printed to the console
    +    getInitialState(){
    +        return {"name":"btna"};
    +    },
        // this binds to the button onClick event
    +    handleClick(){
    +      console.log(this.state.name, "clicked", this.state.test);
    +      Actions.swap(this.state.test)
    +    },
        // render gets called when the component is created or the state is updated
    +    render(){
    +        return (
    +            <button  onClick={this.handleClick}> This is BtnA </button>
    +        )
    +    }
    +})

    The mixins line does all of the magic. It binds the functions of the mixin to the react object, so we could call this.onSwap(arg) directly. It also does the work of calling the lifecycle methods from reflux [init, preEmit, shouldEmit].

    With reflux you can use Reflux.listenTo(store, "onEventToListen"), however we are using Reflux.connect, which will update our component’s state with whatever the reflux store transmits (via this.trigger({stateKey: newState})). This can be limited to a specific state key if you want to filter; in our case we are listening to every event. Connect will also initialize the state of the connected component via the Reflux store’s getInitialState function. The initialization now looks like this:

    TODO: add graphviz

    TheStore.init -> TheStore.getInitialState -> BtnA.getInitialState

    TheStore initializes to {test: true}, and that ends up merged with our BtnA state with a result of {test: true, name: "btna"}.
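The merged result described above can be reproduced directly with a plain object merge, which is effectively what Reflux.connect does with the two getInitialState results:

```javascript
var storeState = { test: true };       // from TheStore.getInitialState()
var componentState = { name: "btna" }; // from BtnA.getInitialState()

// merge the store's state into the component's
var merged = Object.assign({}, componentState, storeState);
console.log(merged); // { name: 'btna', test: true }
```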

    To fire up the app cd into the root directory (with the mix.exs) and run

    mix phoenix.server

    Go ahead and click either button. In the console you can see that clicking a button changes each component’s state for the state key "test".


    Now that you can see how to componentize react and use reflux stores to broker events between your components, you can start to see how you might move away from having to talk to a REST API. Once you tie in Phoenix’s sockets, you can now just push all of your events down to your client, and per the name, have it react. Next up I will be writing about how to take this to the next level.


    Good examples for React:

    %{description: "Elixir Phoenix Framework React and Reflux. asset pipeline and events", title: "Using Reflux to broker events with React and Phoenix"}


  • elixir phoenix ec2 setup

    commands i had to run to get ec2 ready for phoenix

    # dev tools to compile and build
    sudo yum groupinstall "Development Tools"
    sudo yum install ncurses-devel
    sudo yum install java-1.8.0-openjdk-devel
    sudo yum install openssl-devel
    # install erlang
    tar -zxvf otp_src_18.0.tar.gz
    cd otp_src_18.0
    ./configure
    make
    sudo make install
    # get inotify
    tar -zxvf inotify-tools-3.14.tar.gz
    cd inotify-tools-3.14
    ./configure
    make
    sudo make install
    # you may need to tweak inotify settings if you get errors
    # elixir
    mkdir elixir
    cd elixir
    # add it to your path now
    # get phoenix
    mix local.hex
    mix archive.install
    # install node
    sudo yum install nodejs npm --enablerepo=epel
    # install brunch
    sudo npm -g install brunch
    # make phoenix app without ecto
    mix phoenix.new phorechat --no-ecto


    #  some effort to get port 80 forwarded, used iptables rules
    #  ignore the fail2ban stuff
    sudo service iptables status
    Table: filter
    Chain FORWARD (policy ACCEPT)
    num  target     prot opt source               destination
    Chain OUTPUT (policy ACCEPT)
    num  target     prot opt source               destination
    Table: nat
    Chain PREROUTING (policy ACCEPT)
    num  target     prot opt source               destination
    1    REDIRECT   tcp  --              tcp dpt:80 redir ports 8080
    Chain INPUT (policy ACCEPT)
    num  target     prot opt source               destination
    Chain OUTPUT (policy ACCEPT)
    num  target     prot opt source               destination
    1    REDIRECT   tcp  --              tcp dpt:80 redir ports 8080
    Chain POSTROUTING (policy ACCEPT)
    num  target     prot opt source               destination

    %{description: "setting up elixir and the phoenix framework on aws with amazon linux", tags: "ec2 aws elixir phoenix amazon linux", title: "elixir phoenix ec2 setup"}


  • winsplit md5 and current good location

    winsplit md5: b7417d3e1db10db8e6c19caf69dfcc88

    I found an intact copy of 11.04 on

    Winsplit development has been stopped, but it still works great on Windows 8. Their site appears to be trying to sell some other product, vs sharing the open source version.

    %{description: "winsplit 11.04 md5 sum and location", tags: "random, windows, defunct", title: "winsplit md5 and current good location"}


  • cheat sheet for aws cmd s3 cp

    Assuming you have the AWS CLI installed (if you are using Amazon Linux, it is installed by default), this is how you can copy a directory to S3 with public read permissions set:

    aws s3 cp build s3:// --recursive --grants read=uri=

    This is useful for my Obelisk blog which creates static html files from markdown. The docs for aws cp can be found here

    %{description: "recursive copy of directory to s3 bucket with everyone public permission", title: "cheat sheet for aws cmd s3 cp"}



About this blog: This blog will discuss technology associated with my project. I am using Elixir and Phoenix for my backend, and React.js and Reflux for my front end. I have a library called Trabant to experiment with graph database persistence for Elixir. The views expressed on this blog are my own, and are not that of my current employer.

About Me: I am a hobbyist programmer interested in distributed computing. I dabble in Elixir, Ruby, and Javascript. I can't spell very well, and I enjoy golf.