A Simple Talk about the MVVM Pattern

A Personal History with MVVM

About a decade ago, I started to learn client app programming. The first framework I met was Microsoft's WPF, which came with a very powerful development approach: MVVM. Microsoft provided dedicated syntax and toolkits to help developers implement a client app with MVVM. You could use it to write not only WPF clients but also Silverlight and Windows Phone apps, so a C# programmer could easily develop clients for different platforms with the same framework. A new markup language, XAML, offered by Microsoft, was used to describe the UI, while C# or other .NET languages were responsible for the business logic. Although Silverlight and Windows Phone have since faded away, the story of MVVM was just beginning.


Because the Windows Phone ecosystem didn't survive and grow, I moved on to become an iOS developer six years ago. However, the iOS development environment felt primitive to me: Objective-C is a language from the 1980s, and using Xib and Storyboard to describe the View layer is crude compared with XAML. Storyboards become chaotic and often freeze when a project has many pages with complex logical relations. Most of my colleagues therefore built UI components in code rather than with Xib and Storyboard, but that meant the ViewController took charge of both the View and the Controller of the MVC architecture, producing massive classes. Since a ViewController also has to maintain the correlation between Models and Views, you end up writing many assignment statements to keep them in sync, and the ViewController becomes a big pool of tightly coupled logic. Similar problems exist on other platforms, which has brought MVVM back into people's vision.

In the following years, Vue.js, Rx, Flutter and SwiftUI have all made efforts to popularize MVVM.

What is MVVM?

There are three layers in an MVVM architecture – View, ViewModel and Model. Most articles explain the relations between the layers with a diagram like the following:

  • View: describes the UI of components or pages. This layer shows information and animation and offers interaction with users. It is bound only to the ViewModel, which means the ViewModel is the View's unique data source – the data includes text, sizes, images, etc. – and the user's operations trigger the ViewModel's events. So the View only does two things: declaring components and appointing their data sources and events.

  • ViewModel: it has two important parts, properties and events.
    ViewModel properties are usually dynamic data, which means they change often, so the setter of a property matters. When a property is set, for example after a network response in the Model layer, a signal is published to an observer, which refreshes the View's relevant data automatically.
    Besides, the ViewModel can be invoked by the View's events, such as gestures, toggles and button presses. These events can also change the ViewModel's properties; meanwhile, the ViewModel updates the data in the Model.
    Thus, the View is completely decoupled from the Model.

  • Model: an abstract model for non-UI data, though some fields may be the same as in the ViewModel. Moreover, some fundamental data processing and business logic can live in this layer, such as network requests, database operations, and communication with a keep-alive server.
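The three layers above can be sketched in plain Swift. This is a framework-agnostic, minimal sketch: the names `RadioStation`, `StationViewModel` and the `onChange` closure are all hypothetical stand-ins for what a real binding framework (Combine, Rx, SwiftUI) would provide.

```swift
import Foundation

// Model: non-UI data plus fundamental business logic (e.g. a network fetch).
struct RadioStation {
    var name: String
    var listeners: Int
}

// ViewModel: exposes display-ready properties; its setter publishes a signal.
final class StationViewModel {
    // The "observer" – in a real app this would be a Combine/Rx/SwiftUI binding.
    var onChange: ((String) -> Void)?

    private(set) var title: String = "" {
        didSet { onChange?(title) }   // setter publishes the change to the View
    }

    func update(with model: RadioStation) {
        // Transform Model data into View-ready text.
        title = "\(model.name) - \(model.listeners) listening"
    }
}

// "View": only declares its data source; a closure stands in for UI code here.
let viewModel = StationViewModel()
viewModel.onChange = { text in print("label shows:", text) }
viewModel.update(with: RadioStation(name: "Spring FM", listeners: 42))
// prints: label shows: Spring FM - 42 listening
```

Note that the View never touches `RadioStation` directly; it only observes the ViewModel, which is the decoupling the diagram describes.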

This architecture has many advantages for front-end development.

  • It separates the UI and business logic. Designers can import their work directly into a project as the View layer from design software like Sketch, since an independent View layer doesn't demand any programming logic. After that, the programmer just needs to set the data sources from the ViewModel in the View. The designer's and developer's jobs are clearly separated in the organisational structure, so each only pays attention to their own field of the software engineering. To achieve this, Microsoft published Blend for designers in its MVVM ecosystem, which was used to design WPF and Windows Phone UIs, produce animation like Adobe Flash, and even debug the app.

  • The ViewModel helps decouple View and Model. You no longer pair complex relations between View and Model in a very large Controller, and modifying one layer won't significantly influence the others. Moreover, the ViewModel and Model can be reused in different projects with similar logic; you could even easily redevelop a Mac application if you already have an iOS app with the same business.

  • The observer pattern provides an automatic way to bind data, which reduces a lot of development workload and the possibility of bugs. Developers can care more about data and workflow rather than how to control the View.

Nevertheless, there are several disadvantages to MVVM:

  • It's hard for beginners to learn, especially when the framework is not provided natively, as with RxSwift. Although I have developed with Microsoft's MVVM framework for many years, the RxSwift documentation still wears me out. I think developing an easy-to-understand framework on top of UIKit is hard work. Fortunately, Flutter and SwiftUI have been published.


  • Binding data adds some complexity for certain projects. In small projects it can actually bring more workload, and some tiny teams don't have many members, so separating View and ViewModel is unnecessary.
  • The observers maintain a huge hash table of observable objects, which increases memory consumption.

In conclusion, whether or not to choose MVVM for your project depends on many factors. Though these drawbacks will affect some people's decisions, I prefer to use it because of its attractive features. Flutter and SwiftUI are both good options for client developers.


What is ChalkSeven?

ChalkSeven is a Drop7-style game developed with SwiftUI. You drop a numbered ball into a proper column of the grid. If a ball's number equals the count of continuous balls in its row or column, the ball explodes. Besides, there are two special balls: the solid ball and the pending ball. If a solid ball is adjacent to an exploding ball, it becomes a pending ball; if a pending ball is adjacent to an exploding ball, it becomes a normal numbered ball.

In every level, you have 20 opportunities to drop balls. When you have used them all, a row of solid balls emerges from the bottom and pushes the balls in the grid upward. If any balls are pushed out of the grid, the game is over.

Have fun!

The git address is


  • iOS 13
  • Swift 5.2
  • Xcode 11

JLPinBubble – Mergeable Pin Bubbles with SwiftUI

What is JLPinBubble?

JLPinBubble is a control that contains a series of pin bubbles which can be merged and divided when they are close to each other. It can be applied in map apps as an indicator of specific locations. As users pinch the screen to zoom the map, two overlapping bubbles merge into one, and vice versa.

The git address is


  • iOS 13
  • Swift 5.2
  • Xcode 11

How to use it?

A JLBubbleCanvas should be declared in your View's body. You can define the bubbles' tap action in the initializer, and it has a view builder in which you can add the background view of the canvas. For example:

JLBubbleCanvas(bubbleTapAction: { bubble in
    // handle the tapped bubble here
}) {
    Image("image name")
}

JLBubbleCanvasViewModel is the view model of JLBubbleCanvas. It declares the bubble array, the bubbles' size, the switch for showing numbers, and the background image's name. In addition, several functions that control merging and division are defined in this class. Two of them are provided for the View:

func judgetOverlap(_ scale: CGFloat)
func judgeDivided(_ scale: CGFloat)

JLBubbleViewModel is the view model of JLBubbleView, which shows a pin bubble. A pin bubble contains an image as the bubble's background picture, a text, a number and its position information. When two bubbles merge into a new bubble, the new one puts them into its subBubble so that they can be restored later.


What is SpringRadio?

SpringRadio is an online radio player written with SwiftUI. The player control's background colour changes depending on the station's image. Tapping the image on the player control goes to the detail page, where there is some floating Text animation. The orientations include horizontal and vertical, displayed at random. The inspiration comes from Microsoft's MP3 player, the Zune HD.


The detail page:

The git address is

About the Sound Wave of Realtime Audio

The sound wave effect derives from AudioSpectrum, whose developer describes the details in his blog. The essence is analysing the audio's PCM buffer, applying a Hanning window, to calculate the spectrum. Thus, how to seize the PCM buffer from a realtime audio stream is the question, because AVPlayer can't do that.

I modified AudioStreamer, an audio player based on AudioToolbox, to implement it. This player contains a Downloader that sends an HTTP request to fetch audio data. The Parser then collects and processes these data. When the AVAudioPlayerNode is commanded to play the audio, the Reader reads the packets collected by the Parser. After that, the Reader creates AVAudioPCMBuffers based on these packets and sends the buffers to the player node. The AVAudioEngine can call back with the buffer being played via the installTap function of its mainMixerNode.
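The tap described above looks roughly like this. This is a simplified sketch: the engine/player wiring is reduced to the minimum, and the `bufferSize` of 2048 is an assumption, not the project's actual value.

```swift
import AVFoundation

// Minimal engine graph: a player node feeding the main mixer.
let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
engine.attach(playerNode)
engine.connect(playerNode, to: engine.mainMixerNode, format: nil)

// Tap the main mixer to receive the PCM buffer that is currently playing.
// Each callback delivers an AVAudioPCMBuffer that can be fed to an analyzer.
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 2048, format: nil) { buffer, _ in
    // e.g. hand the buffer to AudioSpectrum's RealtimeAnalyzer here
    _ = buffer.frameLength
}
```

After `engine.start()` and scheduling buffers on the player node, the tap block fires continuously with the audio being mixed.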

In this way, these pieces of buffer can be fed into AudioSpectrum's RealtimeAnalyzer to generate the sound wave data, and the SpectrumView renders them to display the wave.

Since AudioStreamer was not originally designed for realtime audio, the Parser keeps all the data it downloads until the request completes, which causes a memory issue for online radio. So I added a strategy: whenever the Reader has consumed a certain amount of data, it removes a certain number of packets from the Parser to prevent this problem. Besides, when the player node schedules a buffer, the function checks whether there are enough packets left to read, to prevent audio stalling.


  • iOS 13
  • Swift 5.2
  • Xcode 11


I referenced these projects:


You can add other online radio items from this website

Architecture of Complex Live Stream Controller

Problems & Analysis

In a live stream application, the live stream studio viewController is the most intricate page: it hosts lots of configurable widgets from all kinds of businesses. In my past work experience, I have seen nearly 100 widgets on one page and more than ten different types of studio pages. Hence, the code of this page becomes hard to maintain and develop sustainably, and can even cause a management disaster, if only one viewController manages all the business logic. But that was the truth about our project: more than 20,000 lines of code were written in one file before I refactored it. It was terrible.

Indeed, the live stream page is very complex, so it needs to be analysed carefully. The studio businesses can be classified according to their types:

  • A. studio for the host of a live video programme
  • B. live stream of a single host, for the audience
  • C. live stream of a single host with shopping, for the audience
  • D. live stream of a host loop, for the audience
  • E. live stream of a 1v1 host PK, for the audience
  • F. financial live stream for the audience, e.g. stock live
  • G. one audience member can stream and interact with the host
  • H. playback video for the audience
  • I. online quiz
  • J. remote claw machine
  • K. more ways…

Besides, there are massive numbers of functions in a studio page, and whether some of them are shown is decided by server configuration:

  • watching video (live stream or playback)
  • sending comments, showing the comment list or barrage
  • showing hosts' information, such as id, name, rank, VIP audiences, gold coins, number of audience members online, etc.
  • giving hosts presents and displaying their animations
  • tipping hosts some money
  • buying commodities that hosts publish
  • sharing the video
  • recording the video
  • swiping the screen to switch to another live stream
  • browsing the rank list
  • some games or activities
  • a lite mode that only displays the video without any widgets
  • rotating horizontal video (most videos display vertically and fill the phone's screen)
  • tapping the screen to send praise hearts

These studios and functions are maintained by different business groups. This diagram reveals our team's organisation structure.

But some basic and abstract functions are common to all these types:

  • getting studio information from the server
  • playing video (live stream or playback)
  • a keep-alive message system (except for playback)
  • detecting swipe gestures if needed
  • displaying studio status, such as closing, entering, temporarily leaving
  • rotating the screen orientation if needed
  • a docker view that shows configurable buttons at the bottom, such as commenting, sharing, recording, sending presents
  • a host information view with the host's audience list
  • a close button at the top-right corner

This foundational work should be done in a core business class rather than mixed with more specific functions in one class. In the next part, I'll talk about my main ideas for redesigning this studio page.

Core Idea

1. What is the widget?

A widget may be a button, a list, a docker, a dialog box or an animation. In some cases it is a View providing touch and gesture handling; in other cases it is a Layer. Thus, a widget should just conform to some common protocol and be pluggable into a certain position in the studio. A good widget API should be simple and not care about the developer's own implementation.

So I defined a StudioWidgetProtocol that includes several basic and common methods for a widget. In this way, all widgets can be regarded as instances of id. The group members didn't need to modify too much code to adapt to the new architecture, just implement several necessary interface methods.

You can consider a widget as a node: it can be plugged into a container view or a parent widget and have its own children. Besides, a level, a sort of priority, is introduced to this interface; it defines an arrangement in a certain order, such as along the Z axis or from left to right. The widget's frame shouldn't be considered in this interface; the business group members operate it and implement other inner logic in the widgets' own classes. The studio is merely responsible for adding widgets at the proper levels and dealing with the relations between widgets and the studio.

However, I know that many colleagues prefer to use a View as a widget in common situations, so I provided a StudioWidget class that implements StudioWidgetProtocol and adds a bool switch, isGesturePenetrate, in case some gestures are intercepted by the widget.
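The widget-as-node idea can be reconstructed in Swift roughly as follows. The protocol members shown here, `level`, `children` and `add(child:)`, are my assumptions about the shape of the interface, not the project's exact API (the original was Objective-C).

```swift
import Foundation

// Hypothetical sketch of the widget interface: a widget is a node with a
// level (its ordering priority) and children, whether it wraps a View or a Layer.
protocol StudioWidgetProtocol: AnyObject {
    var level: Int { get }
    var children: [StudioWidgetProtocol] { get }
    func add(child: StudioWidgetProtocol)
}

// Default implementation for colleagues who just want a View-backed widget.
class StudioWidget: StudioWidgetProtocol {
    let level: Int
    private(set) var children: [StudioWidgetProtocol] = []
    var isGesturePenetrate = false  // let gestures pass through if true

    init(level: Int) { self.level = level }

    func add(child: StudioWidgetProtocol) {
        children.append(child)
        // Keep children ordered by level, e.g. for Z-axis arrangement.
        children.sort { $0.level < $1.level }
    }
}
```

The studio only sees this protocol, so each group can implement its own widget internals freely.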

2. Distinct layers for Z axis

Based on the studio's functions, the studio can be neatly divided into five layers on the Z axis. From bottom to top, they are:

  • backgroundView [0, 100) The backgroundView contains the host's default image as the studio's background.
  • displayerView [100, 200) The displayerView includes the video player control.
  • movableView [200, 300) A vertical swipe gesture can be performed on the screen, which switches between lite and non-lite mode. In lite mode there are almost no widgets above the video player, except an orientation button, a lite-mode switch button and a few others. Most widgets are added on the movableView, such as the comment list, user information and activity icons.
  • liteView [300, 400) The liteView contains the lite-mode switch button and a few other widgets.
  • floatView [400, ∞) Dialog boxes, tips and animations pop over on the floatView.

This design makes sure that different widgets and demands don't wrongly overlap and mix together. Every layer defines a level range so that widgets are plugged into the appropriate layer according to their level.
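The level ranges suggest a simple routing rule from a widget's level to its layer. A sketch (the enum and function names are hypothetical):

```swift
// Route a widget to one of the five Z-axis layers by its level.
enum StudioLayer: String {
    case background, displayer, movable, lite, float
}

func layer(forLevel level: Int) -> StudioLayer {
    switch level {
    case 0..<100:   return .background
    case 100..<200: return .displayer
    case 200..<300: return .movable
    case 300..<400: return .lite
    default:        return .float   // [400, ∞)
    }
}
```

For example, a comment list registered at level 250 would land on the movableView automatically, without the business group knowing anything about the other layers.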

3. How to process events of widgets and studio?

There were two solutions for event transmission in front of me: a unique message bus, or peer-to-peer broadcast.
Let's see the advantages and disadvantages of both.

  • Message Bus: I would need to define a hash table to buffer a message queue and transfer all events. Every message body must have content, an origin and a destination. The bus is easy to devise, and the centralised module is very neat. Meanwhile, the client SDKs all provide convenient bus kits, such as NotificationCenter. Nevertheless, that means every class in the project can register as a message receiver as long as it knows the event name, which is not safe.
  • Broadcast: every widget and the studio has its own announcer and can become a listener itself. The announcer broadcasts messages to all listeners that have subscribed to an event. Although it's a little bothersome for developers to implement their own announcers, the listener instances are explicit and all event relations can be built and managed in a studio controller.

Finally, I preferred the latter. I extended StudioWidgetProtocol to StudioWidgetProvider, which declares an eventAnnouncer, and every studio has an eventAnnouncer as well. An eventAnnouncer holds a hash table collecting all listeners. Once an event is triggered, the listeners' delegate methods are called back to process the message. This is an n-to-n relation. I defined a universal method to respond to events: void announcerOnEvent(string eventName, id sender, List userInfo).
Of course, defining a single message bus instance is also a good and simple way.
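The announcer/listener relation can be sketched like this. The method name follows the text's announcerOnEvent, but the Swift types are my assumptions (the original was Objective-C, with widgets treated as id).

```swift
import Foundation

protocol EventListener: AnyObject {
    func announcerOnEvent(_ eventName: String, sender: AnyObject?, userInfo: [String: Any])
}

// Each widget and each studio owns one announcer; listeners subscribe per event name.
final class EventAnnouncer {
    private var listeners: [String: [EventListener]] = [:]  // hash table of listeners

    func subscribe(_ listener: EventListener, to eventName: String) {
        listeners[eventName, default: []].append(listener)
    }

    func broadcast(_ eventName: String, sender: AnyObject? = nil,
                   userInfo: [String: Any] = [:]) {
        listeners[eventName]?.forEach {
            $0.announcerOnEvent(eventName, sender: sender, userInfo: userInfo)
        }
    }
}
```

A production version would hold the listeners weakly to avoid retain cycles; the strong array here is just to keep the sketch short.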

4. A studio controller and good inheritance

A studio controller is not a viewController. It should be retained by a viewController, and it maintains a studio life cycle that covers the whole life cycle of the viewController. The studio workflow decides this controller's life cycle, and the flow chart below shows the common workflow of most studio types.

Meanwhile, these workflow steps should be divided into unambiguous classes within a good inheritance hierarchy. Every class has a clear duty.

The root class of the studio controller has only one studioWidget as its root view. It's like a canvas on which the entire tree structure of widgets is put. Additionally, it's added to the viewController's view, so the studio can also be set as part of a page rather than full screen.

The base business class declares the main life cycle of the studio and provides a lot of virtual methods for business groups to override. These methods are the ruled timings provided for developers. For example, some studio operation methods define the timing of loading the studio, will-leave-studio and did-leave-studio. The group members can also flexibly choose which message system or video player kits to use.

The touchable view implements three gestures – vertical swipe, horizontal swipe and tap – for changing the studio, switching lite mode and sending praise hearts respectively. Their response events are exposed in this class, including swipe distance and event status updates.

The core business class implements some common functions based on the above classes. The Z-axis layers are defined here, so widgets can be added into the studio in this class. Developers can also conceal widgets that don't meet their demands, just by putting the widget's level into a concealing list.

The basic business group can inherit from the core business class to add common widgets for most studio types. Then the other groups can accomplish their own studio types based on this architecture.

Main UML of Architecture

How to abstract a universal model of widgets and how to comprehend the core of a studio are the keys of the design. A good interface decouples the logic between the studio and the widgets. The groups just need to concentrate on their own work, and their business shouldn't be impacted by others' logic. Besides, the studio should be a controller that manages the relations among widgets, and between widgets and the studio, in an MVC pattern.

This architecture still has many problems, such as performance, and some development details weren't considered, because our team needed to shift to the new architecture in a short time. Indeed, there is much room for optimisation.

iOS Graphics: Workflow of Graphics System

The iOS graphics series will introduce the principles of graphics processing and methods for keeping the iOS screen fluent, including screen refresh essentials, reasons for display stalls, display performance optimisation, OpenGL, AsyncDisplayKit, SwiftUI, etc.


The vision of the real world is continuous, but in the virtual digital realm it is not, as image data is always numerable. For a screen or other display device, graphics are composed of lots of tiny cells that can show an RGB colour and are arranged in a matrix. As long as the cells are small enough, people perceive the screen as continuous. We call these cells pixels.

Let's look at the definition: a pixel is the smallest addressable element in an all-points-addressable display device; it is thus the smallest controllable element of a picture represented on the screen. Perhaps these words are boring, so let's see some practical instances. Among images of the same size, those with more pixels are clearer, since the picture can show more details. Like calculus, the image is divided into tiny parts that approach the infinitely small. We use the resolution to measure the number of pixels in an image. We would think a picture is a mosaic if its resolution is low; if it's very high, we would feel the picture is real.

Common resolutions are 720p (HD), 1080p (Full HD) and 4K.

How do monitors work?

So, how are the pixels displayed on a screen one by one? This is a long story; we can start from CRT monitors.

There is an electron gun in a CRT monitor. The gun doesn't scan randomly but in a designed fashion, as the figure above shows. First it scans from left to right; when the beam reaches the right-hand side of the screen, it undergoes a process known as horizontal flyback. While the beam is flying back, it is also pulled a little way down the screen. The gun keeps repeating this process until the beam reaches the bottom-right corner. When this process finishes, a frame of the picture has been represented on the screen. Then the beam flies back to the initial position for the next scan.

A monitor or other display device uses a hardware clock to send a series of timing signals for synchronisation between the display process and the video controller. The monitor sends a signal called horizontal synchronisation, aka HSync, when the electron gun enters a new row and prepares to scan. After a frame has been drawn and the gun has reset, and before the next frame is ready, the monitor sends a signal called vertical synchronisation, aka VSync. In most cases, the frequency of VSync is fixed.

Today's LED/LCD screens still follow this principle. In this way, all pixels are drawn on the screen and kept on display. If the resolutions of the image and the screen match, the pixels are displayed point to point, which means each image pixel maps to one of the screen's colour pixel lights. If not, several colour pixel lights display, in proportion, one image pixel.

The refresh frequency of most iOS devices, such as iPhone and iPad, is 59.97Hz, and the iPad Pro can even reach 120Hz.

Why do we need a GPU?

Although the CPU does a lot of work in programme processing, the GPU performs better at graphics, because the GPU is good at computing a mass of floating-point operations simultaneously. A GPU has hundreds or even thousands of stream processors, an absolutely different architecture from CPUs, which have far fewer cores, mostly only 6 or 8. Every stream processor is an independent calculator that concentrates on graphics computing. Graphics processing needs this feature, as it can be regarded as a massive and complex arithmetic test.

So the CPU and the GPU need to cooperate during the rendering of a frame. The CPU prepares and initialises frame data and writes it into a shared buffer for the GPU to read and shade. About synchronising CPU and GPU work in iOS, you can reference Apple's documentation: synchronizing_cpu_and_gpu_work

The graphics system workflow

This figure illustrates the workflow of image processing.

The CPU is in charge of initialising the instances of the image model and updating data in a shared buffer. After that, it encodes commands that reference the buffer instance and commits them. At that point, the GPU can read data from the shared buffer and execute the commands. These jobs are added to a CPU queue and a GPU queue respectively and conducted in order, to guarantee that frames can be rendered continuously. This is a producer-consumer pattern: the CPU produces data and the GPU consumes it.
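The producer-consumer relation can be sketched with a semaphore-guarded pool of shared buffers, in the spirit of Apple's CPU/GPU synchronisation guide. This is a plain-Swift simulation, not Metal code: the buffer count of 3 and all the names are assumptions made for the sketch.

```swift
import Foundation

// Three in-flight frame buffers: the CPU writes one while the "GPU" reads another.
let maxFramesInFlight = 3
let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)
var sharedBuffers = [[Float]](repeating: [], count: maxFramesInFlight)

let gpuQueue = DispatchQueue(label: "gpu.command.queue")  // stands in for the GPU
var frameIndex = 0

func renderFrame(_ frame: Int) {
    frameSemaphore.wait()                       // CPU: wait for a free buffer
    let index = frameIndex % maxFramesInFlight
    sharedBuffers[index] = [Float](repeating: Float(frame), count: 4)  // write frame data
    frameIndex += 1
    gpuQueue.async {                            // "GPU" consumes the buffer in order
        _ = sharedBuffers[index].reduce(0, +)   // pretend to shade/render
        frameSemaphore.signal()                 // the buffer is free to rewrite
    }
}

for frame in 0..<6 { renderFrame(frame) }
gpuQueue.sync {}  // drain the queue before exiting
```

The semaphore is what keeps the producer from overwriting a buffer the consumer hasn't finished reading, which is exactly the hazard the frame-buffer discussion below is about.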

Generally, the GPU puts the result into a frame buffer after rendering a frame. The video controller reads data from this buffer line by line according to VSync, and then the picture is shown on the monitor.

However, this is the simplest model, with only one frame buffer. The video controller has to wait until the frame buffer has been completely written by the GPU. This is a big efficiency problem which can lead to stalls, since the video controller may still be waiting for the frame buffer when the monitor has finished scanning. Thus, graphics systems usually use a double frame buffer to solve this problem.

Double frame buffer & its problem

The double frame buffer structure adopts a swap mode to optimise efficiency. The GPU pre-renders a frame and puts it into buffer1, and the video controller reads it first. When the next frame has been rendered and written into buffer2, the video controller points to buffer2 and reads it. Meanwhile, buffer1 is erased and rewritten by the GPU for a new frame. The two buffers keep swapping states between writing and reading. In this way, the video controller doesn't need to wait.

Although it improves the system, it brings a new problem. If the GPU submits the next frame and the buffers swap before the video controller has finished reading, meaning the monitor has shown only part of the old frame, the video controller will draw the rest of the new frame on the screen. This causes picture tearing.

Graphics card firms usually provide a function called V-Sync (AMD's related technology is FreeSync). You must have seen it in your game's or system's graphics settings if you are a gamer. It is an effectual method to avoid tearing: the GPU waits for a V-Sync signal from the monitor before rendering a new frame and updating the frame buffer. Nevertheless, it wastes much computing resource, and the waiting may delay frames.

Reason of display stalls

In iOS, the graphics service notifies the application through CADisplayLink after receiving the VSync signal. At this time, tasks such as initialising image data and computing layout are added to the application's main thread, and the GPU executes the next tasks, such as transforming and rendering, in its own thread. The result, a new frame, is eventually written into the frame buffer.

The signal interval depends on the refresh rate. After each interval, the video controller reads the current data from the frame buffer. If the frame is the newest and complete, it is shown; if the CPU or the GPU hasn't submitted its data yet, the processors go on with their jobs, no new job is added to the main thread until the next valid opportunity, and the screen keeps showing the last frame. This is the reason for display stalls.

In short, if either the CPU or the GPU spends more time than the interval between two VSyncs, the display will stutter. Thus, applications have to reduce the resource consumption of both the CPU and the GPU.

A Brief History of Human-Computer Interaction

Computers are like magic boxes that reveal various extraordinary features and have amazingly changed our lives. Nowadays we use laptops, mobile phones and other fresh, even smarter devices every day, and we find they are our essential partners and even friends. Perhaps we are used to using them, but have you paid attention to how we operate them and how the ways of interaction have changed through history? In case you have no clear idea about that, this article will introduce a brief history of Human-Computer Interaction (HCI).

As an interactive tool, a computer can be operated by users to input information, and it can reflect and feed back to people. Interacting with a computer is like human conversation: we ask questions and usually get a certain response. If I ask someone what the weather will be like tomorrow, he may answer "sunny" or "I dunno". Interaction is a kind of conversation whose methods are not limited to speech between humans and computers; it lets the computer understand what you want and what it should do. So the history of interaction is also a history of the methods of conversation between us and computers. I'll talk about how we have operated computers from their birth to future possibilities. Meanwhile, I'll describe the changes in how computers output information, show results and respond to us over the decades. Finally, I'll list some changes in the ways humans communicate with each other, influenced by HCI and networks.


The era of mechanical operation

At the beginning of the computer age, the computer was regarded as a big, complex calculating machine, so people operated computers like common machines. While some of these mechanical operations are obsolete, a lot of them remain to this day.

  • Real Button/Switch
    I believe we can hardly find a digital device without a button. Although some buttons on monitors have been replaced by touch buttons, buttons still exist. Whether it is a mechanical button or a touch button, we consider both of them real buttons relative to the virtual buttons of a GUI.
    A digital device needs at least one button to turn it on or boot it. I can't tell you the specific time when the first button or switch on a computer was produced, but it must be one of the oldest methods of HCI.
    A button offers three types of basic operation – press, long press and double press. There are also some buttons, like switches, that have two states – pushed or released. We can see all these ways of operation on an iPhone, for example the power button in the default settings: press to lock/unlock the screen, long press to summon Siri, double press to show the Wallet.

    Of course, combinations of buttons bring more operations; for example, the power button plus the volume-up button takes a screenshot on an iPhone.
    This operation method is very user friendly, since people can easily find the location of a button by groping for it, and get physical feedback when pushing it, which makes users feel safe and trust it. Even when the device has problems, a real button is more reliable, since its principle is simple and it hardly ever has bugs. Thus, we often long press the power button to restart when our smart device doesn't work.

  • Knob
    The Torpedo Data Computer was an early electromechanical analog computer used during World War II. The US Navy controlled it by the knobs, like a radio's, on its surface and side. By twisting them, you could change the data you wanted to input. I'm not sure whether the inspiration came from the watch crown on the side that is used for adjusting the time. Obviously, this classic design is still applied to consumer electronics in the digital era, such as the Apple Watch's digital crown.
    It's simple and fluent when we are accurately adjusting numbers and data or selecting an item in a list.

  • Punch Card
    I don't view data storage devices such as tape, floppy disks and flash disks as HCI tools, since they aren't directly operated to control or command computers; computers merely read or write them via other tools operated by humans. However, the punch card is different: in a sense, it is a tool which can directly show the input/output data of a computer.
    Although the earliest punched card was invented in the 1700s, it became ubiquitous in the computing field in the mid-20th century. It could be read by punched card readers and directly recognised by the computer. It is no longer used in the computer field, but many voting systems still use it.

  • Keyboard
    A computer keyboard is a device that uses an arrangement of buttons or keys acting as mechanical levers or electronic switches. Before it came to the computer, similar devices appeared on the piano, the typewriter and so on. Early keyboard devices were used in the stock market in the 1870s, and in the 1940s electric typewriter keyboards were incorporated into early computing as keypunches for writing the punched cards fed to machines like ENIAC. The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing, until the introduction of the mouse as a consumer device in 1984. Fortunately, keyboards were widely accepted by ordinary people, unlike punch cards, which were only for engineers.
    Typing plays a vital role in HCI to this day. It is still the best and most efficient way of editing words. Without the keyboard, I can hardly imagine that I would enjoy writing code, or even prose. Keyboard shortcuts remain the fastest way to issue commands in many situations, even after so many years of the mouse.
    For text, pen and paper – or parchment, etc. – were once the only interactive objects. For a long time before printing was invented, humans copied the Bible, letters, poems and books by hand. It was hard work: scribes tired easily and had to follow scribal norms carefully. Handwritten text, like a work of art, not only conveys the information of its content but also reveals the writer's state of mind, expressed through the force, the forms and the control of the pen strokes.

    The keyboard, however, changed all of that – text is no longer bound to pen and paper and can focus purely on the information it should express. Thanks to the keyboard, interaction between human and computer, or human and human, can become pure communication via virtual text. It liberates us from heavy manual labour and makes writing convenient, flexible and fast, so more people love to create. We can swiftly input masses of words and easily edit or correct them. I can't forget how excited I was when I learnt to use Ctrl + C and Ctrl + V. In this age, I can write articles, program, compose poems and post blogs anywhere with a keyboard. Ordinary people gained the freedom to express themselves in words, breaking through the constraints of the press, religious leaders, cultural authorities and so-called sages.
    Besides, the keyboard produced more ways of controlling the computer, especially in games. With so many keys, games gained more operational possibilities. Players practise complex keyboard operations in RTS games; professional players can hit over 300 actions per minute (APM) on StarCraft II. To these game masters the keyboard is as important as an ancient warrior's sword, a necessary bridge between them and the game. Undoubtedly, the relationship between human and computer has grown closer since the birth of the keyboard.
    Keyboards come in different shapes, connection methods and designs to meet various users' needs. By switch type, mechanical, membrane, conductive rubber and capacitive keyboards provide totally different typing feels. Although QWERTY is the most popular layout, users of other languages can enjoy more suitable layouts. Moreover – for Windows or for Mac; wired or wireless; embedded or standalone; keypadless or full-size – all of them offer users a flexible diversity of choice.

    Despite being an outstanding interactive device, the keyboard brings some difficulty for beginners. Because it carries over many of the typewriter's features, newcomers must pay a learning cost. Recall the time when you began learning to type: you might have been confused by the complex key layout and the weird names of the function keys. You might not have known why the letters are not in alphabetical order, and simply had to memorise the layout and practise typing again and again. Perhaps one day you were told that the strange layout was meant to avoid mechanical jams in the typewriter era, but by then the reason no longer mattered, since you had already mastered it. Although the keyboard reformed how we operate computers, people still regard a computer with a keyboard as a machine, because we must learn and adapt to this fixed layout and memorise all kinds of shortcut combinations. So, if you want to make friends with a computer through a keyboard, a typing lesson is part of your journey.


  • Mouse
  • Remote Controller
  • Joy Stick
  • Touchable Board

The age of smart operation

The time of brain-computer interface


electromechanical show

  • Nixie tube
  • Panel
  • LED Light
Human-Human Interaction in networks

Sidecar display issue on Mac OS Catalina when True Tone or Night Shift are on

Sidecar is a fantastic feature of the newest macOS, Catalina. It lets us extend our workspace onto an iPad as an external display.
I upgraded my MacBook Pro to macOS 10.15 and my iPad Pro to iPadOS 13.1, but there was some oddness in the display colour: window shadows were beige instead of grey.

display issue

normal display

I believe many users encounter the same problem. I searched Google, and some people think it is a bug in the Intel graphics driver.

This issue happens when these three conditions are satisfied simultaneously:

  • 1. Sidecar is in use
  • 2. True Tone or Night Shift is on
  • 3. the Intel graphics card is active

So, there is a temporary solution:

  • On a MacBook that has only the Intel graphics card, turn off True Tone and Night Shift.
    turn off true tone

  • If your MacBook has a discrete or external graphics card, you can go to System Preferences -> Energy Saver and uncheck Automatic graphics switching.

However, this will make your laptop warmer, even hot, and consume more power.

iOS Memory Management

Memory Management in Objective-C

Memory management is the programming discipline of managing the life cycles of objects and freeing them when they are no longer needed. Managing object memory is a matter of performance; if an application doesn’t free unneeded objects, its memory footprint grows and performance suffers. However, garbage collection is not available in iOS; instead, iOS manages memory by reference counting. Let’s learn about it.

Reference Count

If someone owns an object, that object is in use, so the system should not release it. When no one needs to own it any more, it can be deallocated. Based on this rule, iOS manages memory by reference counting: each time the object gains an owner, its reference count increases by 1, and vice versa. When the reference count reaches 0, the object’s dealloc method is invoked. We can use these methods to change the reference count:

| operation             | method                        | result of operation                            |
| --------------------- | ----------------------------- | ---------------------------------------------- |
| create and own object | alloc, new, copy, mutableCopy | creates the object with a reference count of 1 |
| own object            | retain                        | reference count + 1                            |
| release object        | release                       | reference count − 1                            |
| drop object           | dealloc                       | invoked when the reference count reaches 0     |
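Under MRR, that bookkeeping can be sketched as follows. This is a hypothetical fragment that must be compiled without ARC (e.g. with -fno-objc-arc); the counts are shown only in comments, since querying retainCount at runtime is deprecated and unreliable.

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    // Compiled without ARC, so we manage the count by hand.
    NSObject *obj = [[NSObject alloc] init]; // reference count: 1
    [obj retain];                            // reference count: 2
    [obj release];                           // reference count: 1
    [obj release];                           // count reaches 0 -> dealloc is invoked
    return 0;
}
```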

We can understand an object’s life cycle through these methods:

After the creation and initialization phase, an object remains in memory as long as its retain count is greater than zero. Other objects in the program may express an ownership interest in an object by sending it retain or by copying it, and then later relinquish that ownership interest by sending release to the object. While the object is viable, a program may begin the archiving process, in which the object encodes its state in the archive byte stream. When the object receives its final release message, its retain count drops to zero. Consequently, the object’s dealloc method is called, which frees any objects or other memory it has allocated, and the object is destroyed.

In the past, developers needed to manage the reference count manually; we call that manual retain-release (MRR). Nowadays Apple recommends automatic reference counting (ARC), which means you don’t need to call the methods in the table above when you write code: ARC inserts the memory management calls automatically at compile time.

Runloop & Autorelease Pool

A run loop is a loop mechanism for managing a thread. The Application Kit creates at least one NSRunLoop instance per application. The app runs in this loop after launching, as shown in the diagram below. When a touch event happens, the Cocoa Touch framework detects the event, creates an event object, then allocates and initializes an autorelease pool, which is basically an NSAutoreleasePool object (if you use ARC, you cannot use autorelease pools directly; instead, you should use an @autoreleasepool block). Cocoa Touch then invokes your application's event handler, making the event object available.

The handler may put objects into the autorelease pool, or use objects that other objects have put there.

Under MRR, we can use the autorelease method to put an object into the autorelease pool. The autorelease method differs from the release method mentioned in the previous section: release takes effect immediately, decrementing retainCount by 1 and calling dealloc if it reaches zero, whereas autorelease defers the release until the pool is drained.
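Under ARC, an explicit @autoreleasepool block lets us drain temporary objects early instead of waiting for the end of the run loop iteration. Here is a sketch under that assumption; parser and processLine: are hypothetical names, not part of any framework:

```objectivec
// Draining temporaries once per iteration keeps peak memory low
// in a tight loop that creates many autoreleased objects.
for (NSUInteger i = 0; i < 100000; i++) {
    @autoreleasepool {
        NSString *line = [NSString stringWithFormat:@"row %lu", (unsigned long)i];
        [parser processLine:line]; // hypothetical helper
    } // the pool drains here, so 'line' can be freed each iteration
}
```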

Apple documents about Run Loops.

Apple documents about NSAutoreleasePool.

Retain Cycle

  • What is a retain cycle?

    Have a look at this code:

    #import <Foundation/Foundation.h>

    @class RetainCycleClassB;

    @interface RetainCycleClassA : NSObject
    @property (nonatomic, strong) RetainCycleClassB *objectB;
    @end

    #import "RetainCycleClassA.h"
    #import "RetainCycleClassB.h"

    @implementation RetainCycleClassA
    - (instancetype)init {
        if (self = [super init]) {
            self.objectB = [[RetainCycleClassB alloc] initWithClazzA:self];
        }
        return self;
    }
    @end

    #import "RetainCycleClassA.h"

    @interface RetainCycleClassB : NSObject
    @property (nonatomic, strong) RetainCycleClassA *objectA;
    - (instancetype)initWithClazzA:(RetainCycleClassA *)objectA;
    @end

    #import "RetainCycleClassB.h"

    @implementation RetainCycleClassB
    - (instancetype)initWithClazzA:(RetainCycleClassA *)objectA {
        if (self = [super init]) {
            self.objectA = objectA;
        }
        return self;
    }
    @end

    When you run this code, you will find that objectA and objectB are never released. These two instances form a retain cycle.

    The retain cycle is a widespread memory management problem. If two objects A and B own each other, neither can be released when its life cycle finishes, which leads to memory leaks.

    Just like the first graph in the image below: ObjectA's strong pointer points to ObjectB, and ObjectB's strong pointer points to ObjectA. In ARC, a strong pointer means ownership, and the reference count increases by 1. This brings a problem: for ObjectA's reference count to reach 0, ObjectB has to be released first, but for ObjectB to be released, ObjectA has to be released first. This makes an unsolvable cycle.

  • How to avoid retain cycle?

    To solve this, Apple provides the weak pointer in ARC. A weak pointer has two features:

    1. It doesn't increase the reference count.
    2. When the pointed-to object's life cycle ends, the weak pointer is automatically set to nil.

    Look at the second graph in the image above, where a weak pointer replaces the strong one. Although ObjectB still has a pointer to ObjectA, ObjectB doesn't own ObjectA, so the reference count doesn't increase. This way, the memory of both objects can be released normally.

  • Three common causes of retain cycles

    • delegate

    If the delegate property is declared as a strong reference, it will lead to a retain cycle, so declare it weak:

    @property (nonatomic, weak) id <RetainCycleDelegate> delegate;
    MyViewController *viewController = [[MyViewController alloc] init];
    viewController.delegate = self; //suppose self is id<RetainCycleDelegate>
    [self.navigationController pushViewController:viewController animated:YES];
    • block
    typedef void (^RetainCycleBlock)(void);
    @property (nonatomic, copy) RetainCycleBlock aBlock;

    if (self.aBlock) {
        self.aBlock(); // invoke the stored block
    }

    When a block is copied, it strongly references every variable captured inside it. Here the class holds the block as its own property, and self is referenced inside the block, which creates a retain cycle.

    self.testObject.aBlock = ^{
        [self doSomething]; // self is captured strongly -> retain cycle
    };

    We can use a weak reference to break the cycle:

    __weak typeof(self) weakSelf = self;
    self.testObject.aBlock = ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        [strongSelf doSomething];
    };
    • NSTimer

    When we set self as the target of an NSTimer's callback, the timer retains self, creating a retain cycle. So we need to invalidate the timer and set it to nil once its task completes.

    - (void)dealloc {
        // Note: if a repeating timer still retains self, dealloc will never
        // run -- invalidate the timer earlier, e.g. when its task finishes.
        [self.myTimer invalidate];
        self.myTimer = nil;
    }
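    On iOS 10 and later, a block-based timer combined with a weak reference avoids the target/action retain cycle entirely. This is a sketch; doSomething and the myTimer property are assumed from the surrounding example:

    ```objectivec
    __weak typeof(self) weakSelf = self;
    self.myTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                   repeats:YES
                                                     block:^(NSTimer * _Nonnull timer) {
        // The block captures self only weakly, so the timer never owns self.
        [weakSelf doSomething];
    }];
    ```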

To learn more about memory management in iOS