The title says it all (surprisingly).

This is my test.

Test source code - a.c

#include <stdio.h>


int main() {
    int a = 0;
    printf("%d\n", 1 / a);
    printf("Done\n");
    return 0;
}

Build it for x64 (Ubuntu 22.04):

$ lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:    22.04
Codename:    jammy

$ gcc --version
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

$ gcc a.c
$ ./a.out
Floating point exception (core dumped)

Now build it for aarch64 with the Android NDK toolchain:

$ android/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android33-clang a.c
$ adb push a.out /data/a.out
$ adb shell /data/a.out
0
Done

No error! It just gives ZERO!!

The reason: on x86-64, integer division by zero raises a hardware exception, which Linux delivers to the process as SIGFPE, hence the crash above. On AArch64, the SDIV/UDIV instructions are defined to return zero for division by zero and do not trap, so the program silently keeps running.

This is not a Linux kernel topic. I would like to introduce a popular way of protecting the SELinux policy files, used in many Linux (including embedded Linux) systems.

systemd is the user-space process most widely used as the init process on Linux these days. If SELinux is enabled, systemd loads the SELinux data from the SELinux root directory (/etc/selinux by default) at a very early stage. Only then are the registered services started.

Here is a sample mount status.

...(skip)...
overlay on /etc type overlay (rw,relatime,rootcontext=system_u:object_r:etc_t:s0,seclabel,lowerdir=/etc,upperdir=/overlay/etc,workdir=/overlay/.etc-work,x-systemd.automount)
...(skip)...

And you can easily find the systemd service that performs this mount.

Because of systemd's ordering, the SELinux policy is loaded before /etc is hidden behind the overlay filesystem. So the original SELinux data is safely protected from users.

This is a very popular way of protecting original data from users. You can apply the trick to various cases in your own system.
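For reference, here is a minimal sketch of a systemd mount unit matching the mount output above (the paths and unit name follow that output; the ordering comment and WantedBy target are my assumptions, not taken from a real system):

etc.mount

[Unit]
Description=Hide the original /etc behind an overlay
# Assumption: must run after the SELinux policy has been loaded from /etc/selinux.

[Mount]
What=overlay
Where=/etc
Type=overlay
Options=lowerdir=/etc,upperdir=/overlay/etc,workdir=/overlay/.etc-work

[Install]
WantedBy=local-fs.target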

Reverse proxy using nginx

Kibana supports basic authentication (see Elasticsearch 7.4).
That is, we can use basic access authentication (see Wikipedia for details).
A reverse proxy can then be a good solution for this. Here is a sample nginx configuration for the purpose.


docker-compose.yml

version: '3'

services:
  reverseProxy:
    container_name: reverseProxy
    hostname: reverseProxy
    image: nginx
    ports:
      - 5601:5601
    volumes:
      - ./nginx:/etc/nginx

nginx.conf
events {
}

http {
    upstream kibana {
        server KIBANA.IP.ADDR.NUM:5601;
    }

    server {
        listen 5601;
        server_name reverse.kibana.com;

        location / {
            proxy_pass        http://kibana;
            proxy_set_header  X-Real-IP           $remote_addr;
            proxy_set_header  X-Forwarded-For     $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto   $scheme;
            proxy_set_header  Host                $host;
            proxy_set_header  X-Forwarded-Host    $host;
            proxy_set_header  X-Forwarded-Port    $server_port;
            # ID:PW base64 encoded value (RFC 7617)
            proxy_set_header  Authorization       "Basic QWxhZGRpbjpPcGVuU2VzYW1l";
        }
    }
}
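The Authorization value is just 'ID:PW' base64-encoded (Aladdin:OpenSesame here is the well-known example credential from the Wikipedia article, not a real one). It can be generated like this:

$ echo -n 'Aladdin:OpenSesame' | base64
QWxhZGRpbjpPcGVuU2VzYW1l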

Introduction

3-way-merge (henceforth 3WM) is popularly used in SCM (Source Code Management) tools. But there is no standard implementation, and most implementations are for text (source code). So there are already lots of studies on Text-3WM. But, for me, it was very difficult to find any study on Json-3WM. Someone may think Json-3WM is a subset of Text-3WM, and I agree that Text-3WM can be one possible algorithm for Json-3WM. But is it enough? This article is about that question.

Assumption

Array vs. Object

In Json, an Array can be considered an Object. In Javascript, typeof [] gives 'object'. For example, [1, 2] can be represented as {0: 1, 1: 2}. In this article, however, an Array is handled as a primitive value, not as an object - just like in JSON-Merge-Patch (RFC 7386).

Merge

The way of merging two Json objects is clearly defined in JSON-Merge-Patch (RFC 7386). Therefore this article focuses only on 3-way merge.
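For reference, here is a minimal TypeScript sketch of the RFC 7386 merge-patch algorithm (the Json type and function names are mine, not from the RFC):

type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

function isObject(v: Json): v is { [key: string]: Json } {
    return typeof v === 'object' && v !== null && !Array.isArray(v);
}

// RFC 7386: a non-object patch replaces the target entirely (this is why
// arrays behave like primitive values); null deletes a property.
function mergePatch(target: Json, patch: Json): Json {
    if (!isObject(patch)) { return patch; }
    const result: { [key: string]: Json } = isObject(target) ? { ...target } : {};
    for (const [key, value] of Object.entries(patch)) {
        if (value === null) {
            delete result[key];               // null means "delete this property"
        } else {
            result[key] = mergePatch(result[key] ?? null, value);
        }
    }
    return result;
}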

3-Way Merge: Clear

In most common cases, the merge looks clear. For example:

Mergeable

<base>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "g"
    }
}

<our>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "g",
        "h": "i"  # <=
    }
}


<their>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "z"  # <=
    }
}

<merged>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "z",  # <=
        "h": "i"   # <=
    }
}

Conflict

<base>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "g"
    }
}

<our>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "z"  # <=
    }
}


<their>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "y"  # <=
    }
}

3-Way Merge: Not clear

This is the main topic of this discussion. Consider the following example:

<base>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "g"
    }
}

<our>
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "z"  # <=
    }
}

And

<their>
{
    "a": "b"
              # <=
}

It seems to be a conflict. Then which property is in conflict: /c or /c/f?

Case: /c/f (Incorrect)

In this interpretation, the merged result with the Accept-Our option (the resolve strategy for conflicted properties) should be

<merged: accept-our>
{
    "a": "b",
    "c": {
        "f": "z"  # <=
    }
}

What is the issue with this? In this case, there is no difference from the following version of the <their> object.

<their>
{
    "a": "b",
    "c": {}  # <=
}

With this version of <their>, everything looks clear: /c/f is definitely the conflicted property. Then, is it natural that two different changes give the same merged result? I think not. Therefore, this is not the correct interpretation of the given example.

Case: /c

In this case, the interpretation of the changes is:

  • <their>: Deleting /c.
  • <our>: Changing /c/f, and therefore /c (a sub-property of /c is changed).

So /c is the conflicted property, and the merged result with the Accept-Our option should be

<merged: accept-our> (same as <our>)
{
    "a": "b",
    "c": {
        "d": "e",
        "f": "z"
    }
}

The whole sub-object /c is resolved with /c of the <our> object because of the Accept-Our merge strategy.
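Here is a minimal TypeScript sketch of this interpretation (all names are mine, not a standard API; a subtree deleted on one side and modified on the other conflicts at the parent node, and conflicts are resolved with the Accept-Our strategy):

type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

function isObject(v: Json | undefined): v is { [key: string]: Json } {
    return typeof v === 'object' && v !== null && !Array.isArray(v);
}

// Naive deep equality for the sketch (sensitive to object key order).
function equal(a: Json | undefined, b: Json | undefined): boolean {
    return JSON.stringify(a) === JSON.stringify(b);
}

// undefined means "the property does not exist on that side".
function merge3(base: Json | undefined,
                ours: Json | undefined,
                theirs: Json | undefined): Json | undefined {
    if (equal(ours, theirs)) { return ours; }   // both sides agree
    if (equal(base, ours)) { return theirs; }   // only <their> changed
    if (equal(base, theirs)) { return ours; }   // only <our> changed
    if (isObject(base) && isObject(ours) && isObject(theirs)) {
        // All three are objects: recurse so that independent changes to
        // different sub-properties can be merged.
        const keys = new Set([
            ...Object.keys(base), ...Object.keys(ours), ...Object.keys(theirs),
        ]);
        const merged: { [key: string]: Json } = {};
        for (const key of keys) {
            const value = merge3(base[key], ours[key], theirs[key]);
            if (value !== undefined) { merged[key] = value; }
        }
        return merged;
    }
    // Otherwise the conflict is at this node (e.g. one side deleted the
    // subtree while the other modified it): resolve with Accept-Our.
    return ours;
}

Running merge3 on the "not clear" example above returns the whole /c subtree of <our>, matching the <merged: accept-our> result shown.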

Summary

This article describes my personal opinion on a 3-way-merge algorithm for Json objects. There is NO single correct answer; this is just one suggestion.

Fabric has a great document for this.
See https://hyperledger-fabric.readthedocs.io/en/release-1.2/build_network.html#

But once you try your own network from scratch, you will face lots of unexpected obstacles.
So, in addition to that guide, I would like to mention some more information. If you are not familiar with docker and docker-compose, you may be confused about what exactly is happening inside byfn (Build Your First Network). Therefore, before starting to build your network, please make sure you understand what docker and docker-compose are.

One important thing about docker-compose: by default, docker-compose uses a user-defined bridge network. And instantiating a CC (chaincode) means running a docker container. The problem is that the CC container must also join the network that the peer has joined.

The environment variable CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE is for exactly this!

You should NOT forget this!
Without it, the CC container cannot connect to the peer nodes!
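Here is a minimal docker-compose fragment as a sketch (the network name net_byfn is an assumption, following the byfn convention of <compose-project>_<network>; the service definition is abbreviated):

services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    environment:
      # Assumed name: docker-compose prefixes network names with the
      # project name, e.g. project "net" + network "byfn" -> "net_byfn".
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net_byfn
    networks:
      - byfn

networks:
  byfn: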

Environment: Hyperledger Fabric v1.2

Fabric uses 'npm install --production' to build the chaincode docker image.
(https://github.com/hyperledger/fabric/blob/release-1.2/core/chaincode/platforms/node/platform.go#L188)

And it runs the CC by using 'npm start -- --peer.address'.
(https://github.com/hyperledger/fabric/blob/release-1.2/core/chaincode/container_runtime.go#L147)

So only the following files need to be released:

  • javascript files
  • package.json
  • package-lock.json

In the case of NodeJs, the javascript files themselves are deployed to peer nodes. So uglifying and optimization are usually required, and webpack is the most popular tool for these requirements.
And if all source code is bundled into one file - bundle.js - then releasing only three files is enough!

  • bundle.js
  • package.json
  • package-lock.json

And 'npm start' may simply run 'node bundle.js'.
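For example, a minimal package.json for such a release might look like this sketch (the package name and the fabric-shim version range are assumptions):

{
  "name": "mycc",
  "version": "1.0.0",
  "scripts": {
    "start": "node bundle.js"
  },
  "dependencies": {
    "fabric-shim": "~1.2.0"
  }
}

fabric-shim is kept as a regular dependency here on the assumption that it is not bundled into bundle.js, which is why package.json and package-lock.json must still be released alongside the bundle.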

Hyperledger Fabric v1.2


./users/Admin@example.com/tls/client.key   (*E)
./users/Admin@example.com/tls/ca.crt   (*b)
./users/Admin@example.com/tls/client.crt   (*E)


# AdminMsp {

./users/Admin@example.com/msp/cacerts/ca.example.com-cert.pem   (*a)
./users/Admin@example.com/msp/tlscacerts/tlsca.example.com-cert.pem   (*b)
./users/Admin@example.com/msp/admincerts/Admin@example.com-cert.pem   (*c) (Auth. by (*a))
./users/Admin@example.com/msp/keystore/a331923a189a86dca0832b2841e077a3701cfe0063d6792e88f593a243c3338b_sk   (*C)
./users/Admin@example.com/msp/signcerts/Admin@example.com-cert.pem   (*c)(*C) (Auth. by (*a))

# } // AdminMsp



./orderers/orderer.example.com/tls/server.crt   (*F)
./orderers/orderer.example.com/tls/server.key   (*F)
./orderers/orderer.example.com/tls/ca.crt   (*b)


# OrdererMsp {

./orderers/orderer.example.com/msp/cacerts/ca.example.com-cert.pem   (*a)
./orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem   (*b)
./orderers/orderer.example.com/msp/admincerts/Admin@example.com-cert.pem   (*c) (Auth. by (*a))
./orderers/orderer.example.com/msp/keystore/caec6e5979df95564f57b0a50174cc5004977cbdf3f667d1b9f1e1d5128ff490_sk   (*D)
./orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem   (*D) (Auth. by (*a))

# } // OrdererMsp


./ca/ca.example.com-cert.pem   (*a)(*A)
./ca/74652114cc0ad0b0b9c3365af181f9dc3e240fbe8373ec30f3cd10de4bde6221_sk   (*A)

./tlsca/tlsca.example.com-cert.pem   (*b)(*B)
./tlsca/534fde3483b518f3887590a10553f6b274d23692eee5bf5976f9be4943c3926d_sk   (*B)

./msp/cacerts/ca.example.com-cert.pem   (*a)
./msp/tlscacerts/tlsca.example.com-cert.pem   (*b)
./msp/admincerts/Admin@example.com-cert.pem   (*c)


-------------------------------------------------------

NOTE:
- Files with the same lowercase tag ((*a), (*b), ...) are identical copies of each other.
- Files with the same uppercase tag ((*A), (*B), ...) are matching crypto material (public/private key pairs).


To run operations requiring the 'admin' policy, the 'AdminMsp' is required.
That is, a cli node or peer node that has the 'AdminMsp' can run admin operations.
(CORE_PEER_MSPCONFIGPATH should be the path to the AdminMsp.)
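For example (the path prefix is a placeholder for your crypto-config directory, not a real deployment path):

$ export CORE_PEER_MSPCONFIGPATH=/path/to/crypto-config/users/Admin@example.com/msp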

Note: This review was written while closing very small projects with Hyperledger Fabric (henceforth HLF).


You can easily find articles about the pros and cons of the HLF network architecture - e.g. its consensus algorithm, permissions and so on - compared with others - e.g. Bitcoin, Ethereum. So, in this article, I would like to discuss things from the point of view of developing Chaincode.

Updating the same state in multiple transactions

I think the most important and biggest difference between HLF and other popular networks, in terms of developing Chaincode, is this:


If several transactions try to update the same global state, only one of them is allowed per block!


This comes from the architectural design of HLF - the Orderer (I think this is a kind of debt paid for high TPS). Even though the block-creation interval is very short compared with other networks, this is a very serious constraint. For example, in a money trading system, only one transfer transaction per block is allowed for the same account. Because of this characteristic - high TPS with the constraint described above - HLF is very good for assets that have their own ID, like houses, products and so on. But it is not good for assets requiring counting and calculation, like money.

To overcome this constraint, HLF provides sample code in <fabric-samples>/high-throughput. But I think this is not enough. For example, to know the current balance, a transaction has to read all the accumulated variables, and to transfer money, one new variable is added to the global state. That is, still only one such transaction is allowed per block, because the read-set used to calculate the current balance has been updated (appended to)!

Other Misc.

I think the following are not HLF-specific issues, but to me they are also annoying, and I can't find any good way to reduce these pain points in the SDKs or libraries for Chaincode.

  • Read/write operations on the global state are very expensive. And reading a variable that was updated in the same transaction gives its original value, not the updated one (see the sketch after this list).
  • ECDSA is an algorithm for digital signatures, not for encryption/decryption.
  • Testing Chaincode requires a running HLF network, so the test and debugging cycle takes longer than expected.
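Here is a minimal sketch of the read-after-write behavior from the first bullet (StubLike is a hand-written subset of the stub that fabric-shim passes to chaincode; the key name 'balance' is made up):

// Minimal subset of the fabric-shim chaincode stub used in this sketch.
interface StubLike {
    getState(key: string): Promise<Buffer>;
    putState(key: string, value: Buffer): Promise<void>;
}

async function readAfterWrite(stub: StubLike): Promise<void> {
    const before = await stub.getState('balance');        // committed value, e.g. "100"
    await stub.putState('balance', Buffer.from('200'));   // goes into the write-set only
    const after = await stub.getState('balance');         // still the committed "100"!
    // "after" equals "before": reads are served from the committed state,
    // not from this transaction's pending write-set.
    console.log(before.toString(), after.toString());
}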

I hope the HLF team provides good solutions for these. Until then, I hope my sample template - hlfcc-node-starter - is helpful to other developers using typescript as their Chaincode language.

 ngrx/store 6.0.1 / Angular 6.x


This is an issue I ran into while trying ngrx/store for the first time... In hindsight it may be obvious, but I had expected some kind of magic... Anyway...


With createSelector(<state selector>, <projector>), the <state selector> does not seem to detect changes below the object it returns. That is, for a state like

state: {
    inner: {
        a: true
    }
}

if you write

createSelector(state => state.inner, inner => inner.a)

and then set state.inner.a = false through a dispatch, the selector's Observer does not fire. That is, inner.a changed, but state.inner itself did not.


However, when the state reference itself is changed in the reducer - that is, when the reducer returns

return {...state}

instead of

return state;

then the reference of state has changed, so the Observer fires.
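In other words, for the selector above to fire reliably, the reducer should create new references along the changed path. A minimal sketch (the SET_A action is made up for illustration):

interface InnerState { a: boolean; }
interface State { inner: InnerState; }

// Hypothetical action for illustration.
const SET_A = 'SET_A';
interface SetA { type: typeof SET_A; a: boolean; }

function reducer(state: State, action: SetA): State {
    switch (action.type) {
        case SET_A:
            // New references for both state and state.inner, so a selector
            // on state.inner (and on state.inner.a) sees the change.
            return { ...state, inner: { ...state.inner, a: action.a } };
        default:
            return state;
    }
}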



Environment: Angular 6.x, Angular Material 6.x


Once the tooltip has been shown, if undefined is assigned as the tooltip value, the material tooltip component seems to treat the tooltip as disabled from then on.
So, with the following template:
    - html -
    [matTooltip]="myTooltip"

after the tooltip has been 'shown', if the value of myTooltip changes in the order
    'a' => 'b' => 'c'
then 'c' is shown in the tooltip as expected. But if it is replaced in the order
    undefined => 'b'
then 'b' is not shown. The first undefined seems to disable the tooltip.

In this case, if the mouse leaves and then enters again so that the tooltip is 'shown' again, 'b' becomes visible.
That is, the value is not updated in the current show session; it appears in the next 'show' session.
The same happens when the value is changed in the order
    'a' => undefined => 'c'
- 'c' is not shown during that show session.


