JLConsoleLog

What is JLConsoleLog?

JLConsoleLog is an in-app tool that helps Swift developers log information in both development and production environments. You won't miss any key, useful logs about bugs in non-debug mode. You can also integrate it into your project's backdoor toolkit to help you track down vital issues.

How to operate it?

JLConsoleLog supports three types of style (display mode) — Floating, Bubble and FullScreen

Floating

Bubble

FullScreen

There are four buttons on the option view in floating mode. The first is the settings button, where you can clear all logs from the console and filter by category and level. The second switches between floating and fullscreen mode. Pressing the third turns the console into a translucent bubble that only displays the warning and error counts. The last one is the close button.

The floating console automatically becomes translucent after 5 seconds if you don't touch it. Additionally, you can drag the floating console and the bubble anywhere so they don't disturb you.

When you tap a log cell, you enter the log's detail page.


Performance monitoring is a new function. You can now invoke a monitor chart from the bubble button.

How to use it in your project?

JLConsoleController is the console's controller exposed to developers. It provides a shared instance, through which you can set style and logEnabled. When you set its style (display mode), the console immediately appears in the fashion you have given. Like this:

if JLConsoleController.shared.style == .Hidden {
    JLConsoleController.shared.style = .Floating // show console in floating fashion
} else {
    JLConsoleController.shared.style = .Hidden // hide console
}

If logEnabled is true, the console collects log data; if it is false, the console does not.

JLConsoleController.shared.logEnabled = true

A series of functions is offered for logging.

func JLLevelLog(level: JLConsoleLogLevel, category: JLConsoleLogCategory, hasFollowingAction: Bool = false, needPrint: Bool = false, contextData: Dictionary<String,Any>, formats: String...)

func JLVerboseLog(category: JLConsoleLogCategory, hasFollowingAction: Bool = false, needPrint: Bool = false, contextData: Dictionary<String,Any>, formats: String...)

func JLDebugLog(category: JLConsoleLogCategory, hasFollowingAction: Bool = false, needPrint: Bool = false, contextData: Dictionary<String,Any>, formats: String...)

func JLInfoLog(category: JLConsoleLogCategory, hasFollowingAction: Bool = false, needPrint: Bool = false, contextData: Dictionary<String,Any>, formats: String...)

func JLWarningLog(category: JLConsoleLogCategory, hasFollowingAction: Bool = false, needPrint: Bool = false, contextData: Dictionary<String,Any>, formats: String...)

func JLErrorLog(category: JLConsoleLogCategory, hasFollowingAction: Bool = false, needPrint: Bool = false, contextData: Dictionary<String,Any>, formats: String...)

JLConsoleLogLevel is an enum used to sort logs by level. The counts of warnings and errors are displayed on the option view and the bubble.

JLConsoleLogCategory is your business category and is an alias of String. You can define your own categories to meet your needs, such as Video, TrackPage, Commodity Detail… If you need to filter by your categories, you must register them in your code like this:

let SubPageTestLog:JLConsoleLogCategory = "com.consolelog.mybusiness" //declare a category

JLConsoleController.shared.register(newCategory: SubPageTestLog) //register it

The contextData parameter is a serializable Dictionary. Its data is shown on the detail page as JSON.

The formats parameter is a variadic String parameter. Its first value is shown as the cell's title in the console.

If needPrint is true, the log information is also printed to your Xcode console in the Debug environment.

In addition, JLConsoleController provides a followingAction to perform further actions when you finish logging. For example, in the followingAction closure you can send a tracking log to an analytics server such as Firebase. Meanwhile, don't forget to set hasFollowingAction to true when you log.

JLErrorLog(category: SubPageTestLog, hasFollowingAction: true, needPrint: true, contextData: ["test": 5], formats: "Error!", #function, String(#line))

This is an error log example.


Performance Monitor

JLConsoleLog provides a performance monitor. You can add this line to turn it on:

JLConsoleController.shared.performanceMonitable = true

The git address is https://github.com/jacklandrin/JLConsoleLog

Requirement

  • iOS 12
  • Swift 5.2
  • Xcode 11

Reference

PerformanceMonitor

A Simple Talk about the MVVM Pattern

Personal History about MVVM

About a decade ago, I started to learn client app programming. The first framework I met was Microsoft's WPF. It had a very powerful development approach: MVVM. Microsoft provided first-party syntax and toolkits to help developers implement a client app with MVVM. You could use it to write not only WPF clients but also Silverlight and Windows Phone apps, so a C# programmer could easily develop clients on different platforms with the same framework. A new markup language, XAML, offered by Microsoft, was used to describe the UI; C# or other .NET languages were responsible for the business logic. Although Silverlight and Windows Phone have faded away, the story of MVVM was just beginning.

Silverlight

Because the Windows Phone ecosystem didn't survive and grow, I moved over to iOS development six years ago. However, I felt the iOS development environment was primitive. Objective-C is a decades-old language, and using Xib and Storyboard to describe the View layer is crude compared with XAML. Storyboards become chaotic and often get stuck when a project has many pages with complex logical relations. Most of my colleagues therefore build UI components in code rather than with Xib and Storyboard, but that means the ViewController takes charge of both View and Controller in the MVC architecture, and a single class can grow to a massive size. Because a ViewController also needs to maintain the correlation between Models and Views, you must write many assignments in your code to keep them in sync, so it becomes a big pool that couples a lot of logic. Similar problems exist on other platforms, which is why MVVM has come back into people's vision.

In the following years, Vue.js, Rx, Flutter and SwiftUI all made efforts to popularize MVVM.
Flutter

What is MVVM?

There are three layers in an MVVM architecture – View, ViewModel and Model. Most articles explain the relations between the layers with the following diagram:
MVVM

  • View: describes the UI of components or pages. Information and animation are shown, and interaction with users is offered, in this layer. It is bound only to the ViewModel, which means the ViewModel is the View's unique data source – the data includes text, size, image, etc. – and the users' operations also trigger the ViewModel's events. So the View only does two things: declaring components and appointing their data source and events.

  • ViewModel: it has two important things, properties and events.
    The ViewModel's properties are dynamic data, which means they change often. So the setter of a property needs to be written such that, when the property is set – for example due to a network response in the Model layer – a signal is published to an observer, which refreshes the View's relevant data automatically.
    Besides, the ViewModel can be invoked by the View's events, such as gestures, toggles and button presses. These events can also change the ViewModel's properties, and the ViewModel in turn updates the Model's data.
    In this way, the Model is completely decoupled from the View (see the sketch after this list).

  • Model: it is an abstract model for non-UI data, though perhaps some fields are the same as the ViewModel's. Moreover, some fundamental data processing and business logic can be written in this layer, such as network requests, database operations and communication with a keep-alive server.
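
To make the three layers concrete, here is a minimal sketch with SwiftUI and Combine; the types and names are illustrative only, not taken from any particular project.

import SwiftUI
import Combine

// Model: non-UI data and fundamental logic (e.g. the result of a network request).
struct Weather: Decodable {
    let temperature: Double
}

// ViewModel: observable properties the View binds to, plus events it can handle.
final class WeatherViewModel: ObservableObject {
    @Published var temperatureText = "--"

    func refresh() {
        // In a real app this would call into the Model layer (network/database);
        // here we just simulate a response.
        let model = Weather(temperature: 21.5)
        temperatureText = "\(model.temperature) °C"
    }
}

// View: declares components and appoints their data source and events.
struct WeatherView: View {
    @ObservedObject var viewModel: WeatherViewModel

    var body: some View {
        VStack {
            Text(viewModel.temperatureText)            // bound to the ViewModel
            Button("Refresh") { viewModel.refresh() }  // event handled by the ViewModel
        }
    }
}

When refresh() changes the @Published property, the observer mechanism republishes it and the Text updates automatically, which is exactly the binding described above.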

This architecture has many advantages for front-end development.

  • It separates the UI and the business logic. Designers could import their work directly into the project as the View layer from design software such as Sketch, since an independent View layer doesn't demand any programming logic. After that, the programmer just needs to set the data sources from the ViewModel in the View. The designer's and developer's jobs are clearly separated in the organisation structure, and each only pays attention to their own field of the software engineering. To achieve this, Microsoft published Blend for designers in its MVVM framework, which was used to design WPF and Windows Phone UI, produce animation like Adobe Flash, and even debug the app.
    Blend

  • The ViewModel helps the View and the Model decouple. You no longer pair complex relations between View and Model in a very large Controller, and modifying one layer won't significantly influence the others. The ViewModel and the Model can also be reused in different projects whose logic is similar; you can even easily build a Mac application if you already have an iOS app for the same business.

  • The observer pattern provides an automatic way to bind data, which reduces much of the development workload and the possibility of bugs. The developer cares more about data and workflow than about how to control the View.

Nevertheless, there are several disadvantages to MVVM:

  • It's hard for beginners to learn, especially when the framework is not provided natively, as with RxSwift. Although I had developed with Microsoft's MVVM framework for many years, the RxSwift documentation still bored me. I think developing an easy-to-understand framework on top of UIKit is hard work. Fortunately, Flutter and SwiftUI have been published.

RxSwift

  • Binding data adds a bit of complexity for some projects. In small projects it actually brings more binding workload, and some tiny teams don't have many members, so strictly separating View and ViewModel is unnecessary.
  • The observers maintain a huge hash table of observable objects, which increases memory consumption.

In conclusion, whether or not to choose MVVM for your project depends on many factors. Though these drawbacks will affect some people's decisions, I prefer to use it because of its attractive features. Flutter and SwiftUI are both good options for client developers.

ChalkSeven

What is ChalkSeven?

ChalkSeven is a Drop 7 game developed with SwiftUI. You drop a numbered ball into a chosen column of the grid. If a ball's number equals the count of contiguous balls in its row or column, the ball explodes. Besides, there are two special balls: the solid ball and the pending ball. If a solid ball is adjacent to an exploding ball, it becomes a pending ball; if a pending ball is adjacent to an exploding ball, it becomes a normal numbered ball.

In every level, you have 20 opportunities to drop balls. When you have used them all, a row of solid balls emerges from the bottom and pushes the balls in the grid upward. If any balls are pushed out of the grid, the game is over.
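
As a rough illustration of the explosion rule described above, here is a minimal sketch assuming a simple 2D grid model; the types and function names are illustrative, not ChalkSeven's actual implementation.

// Illustrative only: a simplified check of the Drop 7 explosion rule.
enum Cell {
    case empty
    case solid
    case pending
    case number(Int)
}

// Returns true if the numbered ball at (row, column) should explode, i.e. its
// number equals the length of the contiguous run of balls in its row or column.
func shouldExplode(grid: [[Cell]], row: Int, column: Int) -> Bool {
    guard case let .number(value) = grid[row][column] else { return false }

    func runLength(_ cells: [Cell], around index: Int) -> Int {
        let isBall: (Cell) -> Bool = { if case .empty = $0 { return false } else { return true } }
        var count = 1
        var i = index - 1
        while i >= 0, isBall(cells[i]) { count += 1; i -= 1 }
        i = index + 1
        while i < cells.count, isBall(cells[i]) { count += 1; i += 1 }
        return count
    }

    let rowCells = grid[row]
    let columnCells = grid.map { $0[column] }
    return runLength(rowCells, around: column) == value
        || runLength(columnCells, around: row) == value
}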

Have fun!

The git address is https://github.com/jacklandrin/ChalkSeven

Requirement

  • iOS 13
  • Swift 5.2
  • Xcode 11

JLPinBubble: Mergeable Pin Bubbles with SwiftUI

What is JLPinBubble?

JLPinBubble is a control that contains a series of pin bubbles which can be merged and divided as they move closer to or further from each other. It can be applied in map apps as an indicator of specific locations. As users pinch the screen to zoom the map, two overlapping bubbles merge into one, and vice versa.

The git address is https://github.com/jacklandrin/JLPinBubble

Requirement

  • iOS 13
  • Swift 5.2
  • Xcode 11

How to use it?

JLBubbleCanvas should be declared in your View's body. You can define the bubbles' tap action in the initializer, and it has a view builder in which you can add a background view for the canvas. For example:

JLBubbleCanvas(bubbleTapAction: { bubble in
    // ...tapAction
}) {
    Image("image name")
}

JLBubbleCanvasViewModel is the view model of JLBubbleCanvas. It declares the bubbles array, the bubble size, the switch for showing numbers and the background image's name. In addition, several functions that control merging and division are defined in this class. Two functions are provided for the View:

func judgetOverlap(_ scale:CGFloat)
func judgeDivided(_ scale:CGFloat)

JLBubbleViewModel is the view model of JLBubbleView, which is used to show a pin bubble. A pin bubble contains an image as its background picture, a text, a number and its position information. When two bubbles merge into a new bubble, the new one stores them in its subBubble so that they can be restored later.

SpringRadio

What is SpringRadio?

SpringRadio is an online radio player built with SwiftUI. The player control's background colour changes depending on the station's image. Tapping the image on the player control goes to the detail page, where floating Text animations are shown; their orientation can be horizontal or vertical and is picked randomly. The inspiration comes from Microsoft's MP3 player, the Zune HD.

list

The detail page:
animation

The git address is https://github.com/jacklandrin/SpringRadio

About Sound Wave of Realtime Audio

The sound wave effect derives from AudioSpectrum, whose developer describes the details in his blogs. The essence is analysing the audio's PCM buffer, using a Hanning window, to calculate the spectrum graph. Thus, how to capture the PCM buffer from a realtime audio stream is the question, because AVPlayer can't do that.

I modified AudioStreamer, an audio player built on AudioToolbox, to implement it. This player contains a Downloader that sends an HTTP request to get audio data. The Parser then collects and processes these data. When the AVAudioPlayerNode is commanded to play the audio, the Reader reads the packets collected by the Parser. After that, the Reader creates AVAudioPCMBuffer instances from these packets and sends them to the player node. The AudioEngine can call back the buffer that is currently playing via the installTap function of its mainMixerNode.

In this way, these buffers can be fed into AudioSpectrum's RealtimeAnalyzer to generate the sound wave data, and SpectrumView renders them to display the wave.
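
As a rough sketch of the tap itself, this is approximately how the playing buffers can be captured from AVAudioEngine's main mixer node; the analyzer and view calls in the comments are placeholders, not SpringRadio's exact API.

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
engine.attach(playerNode)
engine.connect(playerNode, to: engine.mainMixerNode, format: nil)

// installTap delivers the PCM buffers that are currently being played,
// which can then be fed into a realtime analyzer to produce spectrum data.
let format = engine.mainMixerNode.outputFormat(forBus: 0)
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 2048, format: format) { buffer, _ in
    // e.g. let spectra = realtimeAnalyzer.analyse(with: buffer)   // placeholder
    //      spectrumView.spectra = spectra                         // placeholder
}

engine.prepare()
try? engine.start()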

Since AudioStreamer was not originally designed for realtime audio, the Parser keeps accumulating downloaded data until the request completes, which causes memory issues for online radio. So I added a strategy: once the Reader has consumed a certain amount of data, it removes the corresponding number of packets from the Parser to prevent this problem. Besides, when the player node schedules a buffer, the function checks whether there are enough packets to read, to avoid audio stalling.

Requirement

  • iOS 13
  • Swift 5.2
  • Xcode 11

References

I referenced these projects: AudioSpectrum and AudioStreamer.

Others

You can add other online radio stations from this website: https://www.internet-radio.com

Architecture of Complex Live Stream Controller

Problems & Analysis

In a live stream application, the live studio viewController is the most intricate page, with lots of configurable widgets belonging to all kinds of businesses. In my past work experience, I have seen nearly 100 widgets on one page and more than ten different types of studio page. Hence, the code of this page becomes hard to maintain and develop sustainably, and can even cause a management disaster, if a single viewController manages all the business logic. That was the truth about our project: more than 20,000 lines of code were written in one file before I refactored it. It was terrible.

Indeed, the live stream page is very complex, so it needs to be analysed carefully. The studio businesses can be classified according to their types:

  • A. studio for host of live video programme
  • B. live of single host for audience
  • C. live of single host for audience with shopping
  • D. live of hosts loop for audience
  • E. live of 1v1 hosts PK for audience
  • F. financial live for audience, e.g. stock live
  • G. one audience could stream and interact with host
  • H. playback video for audience
  • I. quiz online
  • J. remote claw machine
  • K. more ways…

Besides, there are massive numbers of functions on a studio page, and whether some of them are shown is decided by server configuration:

  • watching video (live streaming or playback)
  • sending comments, show comment list or barrage
  • showing hosts' information, such as id, name, rank, vip audiences, gold coins, number of audience online, etc
  • giving host presents and displaying their animation
  • tipping hosts some money
  • buying commodities that hosts publish
  • sharing video
  • recording video
  • swiping the screen to switch to another live stream
  • browsing rank list
  • some games or activities
  • showing lite mode that only displays video without any widgets
  • rotating horizontal video (most videos are vertical display and fill phone's screen)
  • tapping screen to praise hearts

These studios and functions are maintained by different business groups. This diagram reveals our team's organisation structure.

But some basic and abstract functions are common to all these types:

  • getting studio information from server.
  • playing video (live streaming or playback).
  • keep-alive message system, except playback.
  • detecting swiping gestures if needed.
  • the status of studio displaying, such as closing, entering, temporarily leaving.
  • rotating screen orientation if needed.
  • a docker view that shows configurable buttons at bottom, such as commenting, sharing, recording, sending presents.
  • a host information view with host's audience list.
  • a close button at the top-right corner.

This foundational work should be done in a core business class rather than mixed with more specific functions in one class. So, in the next part I'll talk about my main ideas for redesigning this studio page.

Core Idea

1. What is a widget?

A widget may be a button, a list, a docker, a dialog box or an animation. In some cases it is a View that handles touches and gestures; in other cases it is a Layer. Thus, a widget only needs to meet some common protocol so it can be plugged into a certain position in the studio. A good widget API should be simple and should not care about the developer's own implementation.

So I defined a StudioWidgetProtocol including several basic, common methods for a widget. In this way, all widgets can be regarded as instances of id. The group members didn't need to modify too much code to adapt to the new architecture, just implement several necessary interface methods.

You can consider a widget as a node: it can be plugged into a container view or a parent widget and have its own children. Besides, a level, as a sort of priority, is introduced to this interface; it defines an arrangement in a certain order, such as along the Z axis or from left to right. The widget's frame is not part of this interface: the business group members operate it and implement other inner logic in the widgets' own classes. The studio is merely responsible for adding widgets at the proper levels and dealing with the relations between widgets and the studio.

However, I know that many colleagues prefer to use a View as a widget in most situations, so I provided a StudioWidget class which implements StudioWidgetProtocol and adds a boolean switch, isGesturePenetrate, to prevent gestures from being unintentionally intercepted by the widget.
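
The original interface was defined in Objective-C; a rough Swift re-expression of the idea, with approximate names, might look like this:

import UIKit

// A sketch of the widget abstraction described above; names are illustrative.
protocol StudioWidgetProtocol: AnyObject {
    // The level decides which Z-axis layer the widget is plugged into
    // and how it is ordered among its siblings.
    var level: Int { get }

    // The view the studio actually adds to the layer hierarchy.
    var contentView: UIView { get }

    // Child widgets, so a widget can be treated as a node in a tree.
    var children: [StudioWidgetProtocol] { get }

    func addChild(_ widget: StudioWidgetProtocol)
    func removeFromParent()
}

// A convenient default implementation for view-based widgets.
class StudioWidget: StudioWidgetProtocol {
    let level: Int
    let contentView: UIView
    private(set) var children: [StudioWidgetProtocol] = []

    // When true, touches the widget doesn't handle pass through to the
    // studio's gesture views underneath.
    var isGesturePenetrate = false

    init(level: Int, contentView: UIView) {
        self.level = level
        self.contentView = contentView
    }

    func addChild(_ widget: StudioWidgetProtocol) {
        children.append(widget)
        contentView.addSubview(widget.contentView)
    }

    func removeFromParent() {
        contentView.removeFromSuperview()
    }
}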

2. Distinct layers on the Z axis


The studio can be neatly divided into five layers on the Z axis based on function. From bottom to top, they are:

  • backgroundView [0, 100) The backgroundView contains the host's default image as the studio's background.
  • displayerView [100, 200) The displayerView includes the video player control.
  • movableView [200, 300) A vertical swipe gesture on the screen swaps between lite and non-lite mode. Lite mode means there is no widget above the video player except an orientate-screen button, a lite-mode-switch button and a few other widgets. Most widgets are added to the movableView, such as the comment list, user information and activity icons.
  • liteView [300, 400) The liteView contains the lite-mode-switch button and a few other widgets.
  • floatView [400, ∞) Dialog boxes, tip information and animations are popped over on the floatView.

This design makes sure that different widgets and demands don't wrongly overlap and mix together. Every layer is defined by a level range so that widgets are plugged into the appropriate layer according to their level; a small sketch of this level-based routing follows.
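
As a hedged sketch of that routing (reusing the StudioWidgetProtocol sketch from above; the ranges follow the list, the type names are illustrative):

import UIKit

// Illustrative routing of widgets into Z-axis layers by level range.
enum StudioLayer {
    case background, displayer, movable, lite, float

    init(level: Int) {
        switch level {
        case 0..<100:   self = .background
        case 100..<200: self = .displayer
        case 200..<300: self = .movable
        case 300..<400: self = .lite
        default:        self = .float
        }
    }
}

func add(_ widget: StudioWidgetProtocol, to layerViews: [StudioLayer: UIView]) {
    let layer = StudioLayer(level: widget.level)
    // A fuller version would also insert the widget among its siblings
    // ordered by level, to keep a deterministic Z order.
    layerViews[layer]?.addSubview(widget.contentView)
}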

3. How to process events between widgets and the studio?

There were two solutions for event transmission in front of me: a single message bus, or peer-to-peer broadcast.
Let's look at the advantages and disadvantages of both.

  • Message bus. I would need to define a hash table to buffer a message queue and transfer all events. Every message body must have content, an origin and a destination. The bus is easy to devise and the centralised module is very neat; meanwhile, the client SDKs all provide convenient bus kits such as NotificationCenter. Nevertheless, that means every class in the project could register as a message receiver as long as it knows the event name, which is not safe.
  • Broadcast. Every widget and studio has its own announcer and can itself become a listener. The announcer broadcasts a message to all listeners that have subscribed to the event. Although it's a little bothersome for developers to implement their own announcers, the listener instances are explicit, and all event relations can be built and managed in a studio controller.

Finally, I preferred the latter. I extended StudioWidgetProtocol to StudioWidgetProvider, which declares an eventAnnouncer, and every studio has an eventAnnouncer as well. An eventAnnouncer holds a hash table collecting all listeners. Once an event is triggered, the listeners' delegate methods are called back to process the message. This is an n-to-n relation. I defined a universal method to respond to events: void announcerOnEvent(string eventName, id sender, List userInfo).
Of course, defining a single message bus instance is also a good and simple way.
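
A hedged Swift sketch of that announcer/listener idea (the real project used Objective-C and the universal announcerOnEvent method above; the names below are illustrative):

// Illustrative announcer/listener sketch; not the project's exact API.
protocol StudioEventListener: AnyObject {
    func announcer(onEvent eventName: String, sender: Any?, userInfo: [String: Any]?)
}

final class EventAnnouncer {
    // Weak wrappers so listeners don't leak.
    private struct WeakListener { weak var listener: StudioEventListener? }
    private var listeners: [String: [WeakListener]] = [:]

    func subscribe(_ listener: StudioEventListener, to eventName: String) {
        listeners[eventName, default: []].append(WeakListener(listener: listener))
    }

    func broadcast(_ eventName: String, sender: Any?, userInfo: [String: Any]? = nil) {
        listeners[eventName]?.forEach {
            $0.listener?.announcer(onEvent: eventName, sender: sender, userInfo: userInfo)
        }
    }
}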

4. A studio controller and good inheritance

A studio controller is not a viewController. It should be retained by a viewController and maintain a studio life cycle that covers the whole life cycle of a viewController. The studio workflow decides this controller's life cycle, and the flow chart below shows the common workflow of most studio types.

Meanwhile, these workflow steps should be divided among unambiguous classes arranged in a good inheritance hierarchy. Every class has its own clear duty.

The root class of the studio controller has only one studioWidget as its root view. It's like a canvas on which the entire tree structure of widgets is put. Additionally, it is added to the view controller's view, so the studio can also be set as part of a page instead of always being full screen.

The base business class declares the main life cycle of the studio and exposes many virtual methods for business groups to override. These methods are the ruled timing points provided for developers; for example, some studio operation methods define the timing of load studio, will leave studio and did leave studio. The group members can also flexibly decide which message system or video player kits to use.

The touchable view implements three gestures: a vertical swipe for switching lite mode, a horizontal swipe for changing the studio, and a tap for praising hearts. Their response events, including the swipe distance and event status updates, are exposed in this class.

The core business class implements some common functions based on the classes above. The Z-axis layers are defined here, so widgets are added to the studio in this class. Developers can also conceal widgets that don't meet their demands by simply putting the widget's level in a concealment list.

The basic business group can inherit from the core business class to add common widgets for most studio types. The other groups can then build their own studio types on top of this architecture.
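
Put together, an illustrative skeleton of this inheritance might look like the following; the class and method names are approximations of the design described above (reusing the StudioWidget sketch from earlier), not the project's real code.

import UIKit

// Root: owns a single root widget that acts as the canvas for all widgets.
class StudioRootController {
    let rootWidget = StudioWidget(level: 0, contentView: UIView())

    func attach(to containerView: UIView) {
        containerView.addSubview(rootWidget.contentView)
    }
}

// Base business: declares the life cycle and virtual methods to override.
class StudioBaseBusinessController: StudioRootController {
    func loadStudio() {}
    func willLeaveStudio() {}
    func didLeaveStudio() {}
}

// Touchable: vertical swipe (lite mode), horizontal swipe (change studio), tap (praise).
class StudioTouchableController: StudioBaseBusinessController {
    func handleVerticalSwipe(distance: CGFloat) {}
    func handleHorizontalSwipe(distance: CGFloat) {}
    func handleTap() {}
}

// Core business: defines the Z-axis layers and plugs in the common widgets.
class StudioCoreBusinessController: StudioTouchableController {
    override func loadStudio() {
        super.loadStudio()
        // add background/displayer/movable/lite/float layers and common widgets here
    }
}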

Main UML of Architecture


How to abstract a universal widget model and how to comprehend the core of a studio are the keys of this design. A good interface decouples the logic between the studio and the widgets. Each group just needs to concentrate on itself, and its business shouldn't be impacted by others' logic. Besides, the studio should be a controller that manages the relations among widgets, and between widgets and the studio, in an MVC pattern.

This architecture still has many problems, such as performance, and some development details weren't considered, because our team needed to shift to the new architecture in a short time. Indeed, there is still much room for optimisation.

iOS Graphics: Workflow of Graphics System

The iOS graphics series will introduce the principles of graphics processing and the methods for keeping the iOS screen fluent, covering the essentials of screen refreshing, the reasons for display stalls, display performance optimisation, OpenGL, AsyncDisplayKit, SwiftUI, etc.

Pixel

The vision of the real world is continuous, but it isn't in the virtual digital realm, since image data is always countable. For a screen or other display device, graphics are composed of lots of tiny cells that can show an RGB colour and are arranged as a matrix. People mistake the screen for being continuous as long as the cells are small enough. We call these cells pixels.

Let's look at the definition: a pixel is the smallest addressable element in an all-points-addressable display device, so it is the smallest controllable element of a picture represented on the screen. Perhaps the words are dry, so let's look at some practical instances. For images of the same size, the one with more pixels is clearer, since the picture can show more detail. Like calculus, the image is divided into very tiny parts that approach the infinitely small. We use resolution to measure the number of pixels in an image. We see a mosaic if the resolution of a picture is low, but if it's very high, we feel the picture is real.

Common resolutions are 720p (HD), 1080p (Full HD) and 4K.

How do monitors work?

So how are the pixels displayed on a screen one by one? This is a long story; let's start with CRT monitors.

There is an electron gun in a CRT monitor. The gun doesn't scan randomly but in a designed fashion, shown in the figure above. First, it scans from left to right; when the beam reaches the right-hand side of the screen it undergoes a process known as horizontal flyback. While the beam is flying back, it is also pulled a little way down the screen. The gun keeps repeating this process until the beam reaches the bottom-right corner, at which point one frame of the picture has been represented on the screen. Then it flies back to the initial position for the next scan.

Monitors and other display devices use a hardware clock to send a series of timing signals, which are used for synchronisation between the display process and the video controller. The monitor sends a signal called horizontal synchronisation, aka HSync, when the electron gun enters a new row and prepares to scan it. After the frame has been drawn and the gun has reset, before the next frame is ready, the monitor sends a signal called vertical synchronisation, aka VSync. In most cases, the frequency at which VSync is sent is fixed.

Today's LED/LCD screens still follow this principle. In this way, all pixels are drawn on the screen and kept on display. If the resolutions of the image and the screen match, the pixels are displayed point to point, which means each image pixel maps to one of the screen's colour pixel lights. If not, several colour pixel lights display proportionally to map one image pixel.

The refresh rate of most iOS devices, such as iPhone and iPad, is 59.97 Hz, and the iPad Pro can even reach 120 Hz.

Why do we need a GPU?

Although the CPU does a great deal of work in program processing, the GPU performs better at graphics, because the GPU is good at computing a mass of floating-point operations simultaneously. A GPU has hundreds, even thousands, of stream processors, an absolutely different architecture from CPUs, whose processor count is far lower, most with only 6 or 8 cores. Every stream processor is an independent calculator that concentrates solely on graphics computing. Graphics processing needs this feature, as it can be regarded as a massive and complex arithmetic test.

So the CPU and the GPU need to cooperate during the rendering of a frame. The CPU prepares and initialises frame data and then writes it into a shared buffer for the GPU to read and shade. For synchronising the CPU's and the GPU's work in iOS, you can reference Apple's documentation: synchronizing_cpu_and_gpu_work
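
A minimal sketch of the pattern described in that documentation, using a semaphore to bound the number of frames in flight (assuming Metal; the closure parameters are placeholders):

import Metal
import Foundation

// The CPU waits on a semaphore before writing into one of the in-flight
// buffers, and the GPU signals it when the command buffer completes.
let maxFramesInFlight = 3
let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

func renderFrame(updateSharedBuffer: () -> Void,
                 encodeCommands: (MTLCommandBuffer) -> Void) {
    // Block the CPU if the GPU is still using all in-flight buffers.
    frameSemaphore.wait()

    updateSharedBuffer()                      // CPU: prepare frame data

    let commandBuffer = commandQueue.makeCommandBuffer()!
    encodeCommands(commandBuffer)             // CPU: encode commands referencing the buffer

    commandBuffer.addCompletedHandler { _ in
        frameSemaphore.signal()               // GPU finished: free a slot
    }
    commandBuffer.commit()
}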

The graphics system workflow


This figure illustrates the workflow of image processing.

The CPU is in charge of initialising the image model instances and updating data in a shared buffer. After that, it encodes commands that reference the buffer instance and commits these commands. The GPU can then read data from the shared buffer and execute the commands. These jobs are added to a CPU queue and a GPU queue respectively and conducted in order, so that frames can be rendered continuously. This is a producer-consumer pattern: the CPU produces data and the GPU consumes it.

Generally, the GPU puts the result into a frame buffer after rendering a frame. A video controller reads data from this buffer line by line, paced by VSync, and the picture is then shown on the monitor.

However, this is the simplest model, with only one frame buffer. The video controller has to wait until the GPU has finished writing the frame buffer. This is a big efficiency problem that can lead to stalls, since it is possible that the video controller is still waiting for the frame buffer when the monitor has finished scanning. Thus, graphics systems usually use a double frame buffer to solve this problem.

Double frame buffer & its problem


The double frame buffer structure adopts a swap mode to optimise efficiency. The GPU pre-renders a frame and puts it into buffer1, which the video controller reads first. When the next frame has been rendered and written into buffer2, the video controller points to buffer2 and reads it. Meanwhile, buffer1 is erased and rewritten by the GPU for a new frame. These two buffers keep swapping between writing and reading states. In this way, the video controller doesn't need to wait.

Although it improves the system, it brings a new problem. If the video controller hasn't finished reading – meaning the monitor has shown only part of the frame – while the GPU has already submitted the next frame and the buffers have swapped, the video controller will draw the rest of the new frame on the screen. This causes picture tearing.

Graphics card manufacturers usually provide a function called V-Sync (AMD's adaptive counterpart is FreeSync). You must have seen it in your game's or system's graphics settings if you are a gamer. It is an effective method to avoid tearing: the GPU waits for a V-Sync signal from the monitor before rendering a new frame and updating the frame buffer. Nevertheless, it wastes computing resources, and the waiting may delay frames.

Reason of display stalls

In iOS, the graphics service notifies the application through CADisplayLink after receiving the VSync signal. At that moment, tasks such as initialising image data and computing layout are added to the application's main thread, and the GPU executes the next tasks, such as converting and rendering, in its own thread. The result, a new frame, is eventually written into the frame buffer.
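
For reference, this is the typical way to hook into that VSync-driven callback with CADisplayLink; the work in the callback here is only a placeholder.

import UIKit

final class FrameTicker {
    private var displayLink: CADisplayLink?

    func start() {
        // CADisplayLink fires once per screen refresh, in step with VSync.
        let link = CADisplayLink(target: self, selector: #selector(step))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func step(_ link: CADisplayLink) {
        // Placeholder for per-frame work: update layout, compute image data, etc.
        // If this work (plus the GPU's) takes longer than the VSync interval,
        // the screen keeps showing the previous frame and the UI appears to stall.
        let frameBudget = link.targetTimestamp - link.timestamp
        _ = frameBudget
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}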

The interval between signals depends on the refresh rate. After each interval, the video controller reads the current data from the frame buffer. If the frame is the newest and complete, it is shown; if the CPU or the GPU hasn't submitted its data yet, the processors carry on with their jobs, no new job is added to the main thread until the next valid opportunity, and the screen keeps showing the last frame. This is the reason for display stalls.

In short, if the CPU and the GPU together spend more time than the interval between two VSync signals, the display will stall. Thus, the application has to reduce the resource consumption of both the CPU and the GPU.

A Brief History of Human-Computer Interaction

Computers are like magic boxes that reveal various extraordinary features and amazingly change our lives. Nowadays we use laptops, mobile phones and other, ever smarter devices every day, and we find they have become essential partners, even friends. Perhaps we are used to them, but have you ever paid attention to how we operate them and how the ways of interaction have changed over history? In case you have no clear idea about that, this article will introduce a brief history of Human-Computer Interaction (HCI).

As an interactive tool, a computer can be operated by users to input information, and it can reflect and give feedback to people. Interacting with a computer is like a human conversation: we ask a question and can usually get some response. If I ask someone what the weather will be like tomorrow, they may answer "sunny" or "I dunno". Interaction is also a kind of conversation whose methods aren't limited to speech between humans and computers; it lets the computer understand what you want and what it should do. So the history of interaction is also a history of the methods of conversation between us and computers. I'll talk about how we have operated computers from their birth to future possibilities. Meanwhile, I'll describe how computers have output information, shown results and responded to us over these decades. Finally, I'll list some changes in the ways humans communicate with each other, influenced by HCI and networks.

Input

The era of mechanical operation

In the early age of the computer, it was regarded as a big, complex calculating machine. Therefore, people operated computers like any common machine. While some of these mechanical operations are obsolete, a lot of them remain to the current day.

  • Real Button/Switch
    I believe we would be hard pressed to find a digital device without a button. Although some buttons on monitors have been replaced by touch-sensitive ones, buttons still exist. Whether it is a mechanical button or a touch button, we consider both of them real buttons, relative to the virtual buttons of a GUI.
    A digital device needs at least one button to turn it on or boot it. I can't tell you the specific time when the first button or switch appeared on a computer, but it must be one of the oldest methods of HCI.
    A button offers three basic operations – press, long press and double press. There are also buttons, like switches, that have two states – pushed or released. We can see all of these on an iPhone, for example the power button in the default settings: press to lock/unlock the screen, long press to bring up Siri, double press to show the wallet.

    Of course, combinations of buttons bring more operations; for example, the power button plus the volume-up button takes a screenshot on an iPhone.
    This operation method is very user friendly, since people can easily find the location of a button by groping for it, and get physical feedback when pushing it, which makes users feel safe and trust it. Even when the device has problems, a real button is more reliable, because its principle is simple and it hardly ever fails. That is why we often long-press the power button to restart when our smart device stops working.

  • Knob
    wikipedia https://en.wikipedia.org/wiki/File:TDCfullview.jpg
    The Torpedo Data Computer was an early electromechanical analog computer used during World War II. The US Navy controlled it with knobs, like a radio's, on its surface and side. By twisting them, you could change the data you wanted to input. I'm not sure whether the inspiration came from the crown of a watch, which is used to adjust the time. Obviously, this classic design is still applied to consumer electronics in the digital era, such as the Apple Watch's digital crown.
    Apple
    It's simple and fluent when we are finely adjusting numbers or data, or selecting an item from a list.

  • Punch Card
    I don't view data storage devices such as tape, floppy disks and flash drives as HCI tools, since they aren't directly operated to control or command computers; computers are simply directed, via other tools operated by humans, to read or write them. However, the punch card is different: in a sense, it is a tool that can directly show a computer's input/output data.
    Although the earliest punched cards were invented in the 1700s, they were widely used in the computer field into the 1970s. They could be read by punched card readers and directly recognised by the computer. They are no longer used in computing, but many voting systems still use them.
    wikipedia https://en.wikipedia.org/wiki/File:IBM1442.corestore.jpg

  • Keyboard
    A computer keyboard is a device that uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Before it was used for computers, you could find similar things on pianos, typewriters and so on. Early keyboard devices were used in the stock market in the 1870s, and electric typewriter keyboards were incorporated into the earliest computers as keypunches for writing punched cards, as in ENIAC in the 1940s. The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing, until the introduction of the mouse as a consumer device in 1984. Fortunately, keyboards are widely accepted by ordinary people, unlike punch cards, which were only for engineers.
    Typewriting plays a vital role in HCI to this day. It is still the best and most efficient way of editing words. If there were no keyboard, I can hardly imagine that I would enjoy writing code, or even prose. Keyboard shortcuts are still the fastest way to issue commands in a lot of situations, even though people have used the mouse for many years.
    For text, pen and paper – or parchment, etc. – were the only interactive objects in the past. Humans wrote copies of the Bible, letters, poems and books by hand for a long time before printing was invented. It was quite hard work, for writers easily grew tired and had to be careful about scribal norms. Handwritten text, consisting of words or characters, is like a work of art: it not only conveys the information of its content but also shows the writer's mental expression, built on the force, the forms and the control of the pen strokes.

    However, the keyboard changed all of this – text is no longer an object of pen and paper; it can focus purely on the information the text is meant to express. The interaction between human and computer, or human and human, can become pure communication via virtual text because of the keyboard. It liberates us from heavy manual labour and makes writing convenient, flexible and fast, so more people love to create. We can swiftly input a mass of words and easily edit or correct them. I can't forget how excited I was when I learnt to use Ctrl + C and Ctrl + V. In this age, I can write articles, code programs, create poems and post blogs anywhere with a keyboard. Ordinary people now have the free will to express themselves with words, breaking through the constraints of the press, religious leaders, cultural authorities and so-called sages.
    Besides, the keyboard produced more possibilities for controlling computers, especially in games. With so many keys, games gained more ways to be operated. Players practise complex keyboard operations in RTS games; professional players can hit over 300 actions per minute (APM) with the keyboard in StarCraft II. To these game masters, the keyboard is as important as an ancient warrior's sword and is a necessary bridge between them and computer games. It is undoubtable that the relationship between humans and computers has become closer since the birth of the keyboard.
    Keyboards have different shapes, connection methods and designs to meet various users' needs. Based on the switch type, mechanical, membrane, conductive rubber and capacitive keyboards provide totally different pressing feels. Although QWERTY is the most popular layout, users of other languages can enjoy more suitable layouts. Moreover, for Windows or for Mac; wired or wireless; embedded or standalone; keypadless or full-size – all of them offer users a flexible diversity of choice.

    Despite being an outstanding interactive device, it brought some difficulty for rookies. Because the keyboard carries over many of the typewriter's features, beginners have to pay a learning cost. Recall the time when you began learning to type with a keyboard: you might have been confused by the complex key layout and the weird names of the function keys. You might not have known why the letters are not in alphabetical order, and simply had to memorise the layout and practise typing again and again. Perhaps one day, when you are told that the strange layout was meant to avoid a mechanical problem in the typewriter era, the reason no longer matters, since you have already mastered typing. People consider a computer with a keyboard a machine, even though the keyboard reformed the way we operate computers, because we need to learn, to adapt to this particular layout, and to remember all kinds of shortcut combinations. So, if you want to make friends with a computer through its keyboard, a typing lesson is part of your journey.

//TODO

  • Mouse
  • Remote Controller
  • Joy Stick
  • Touchable Board

The age of smart operation

The time of brain-computer interface

Output

Electromechanical displays

  • Nixie tube
  • Panel
  • LED Light

Human-Human Interaction in Networks

Talk about Aspect Oriented Programming in iOS

1. What is Aspect Oriented Programming?

Aspect Oriented Programming (AOP) is usually used in backend development, but it has become more and more popular in client programming recently.

You can find the definition on Wikipedia:

In computing, AOP is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding additional behavior to existing code (an advice) without modifying the code itself, instead separately specifying which code is modified via a "pointcut" specification, such as "log all function calls when the function's name begins with 'set'". This allows behaviors that are not central to the business logic (such as logging) to be added to a program without cluttering the code core to the functionality. AOP forms a basis for aspect-oriented software development.

2. How to implement AOP in iOS?

When we develop a client-side analytics system, sometimes we don't want to add any code to the business logic. AOP is a good choice for some logging situations: we can hook the methods in which we need to log, such as logging page appearance and disappearance.

Method Swizzling

In iOS programming, we can implement AOP with Method Swizzling. Method Swizzling is a way to replace an original method with another one.

Some Runtime knowledge is necessary to explain Method Swizzling. A Method in Objective-C consists of a SEL (selector) and an IMP (implementation).

  • Method selectors are used to represent the name of a method at runtime. A method selector is a C string that has been registered (or “mapped“) with the Objective-C runtime. Selectors generated by the compiler are automatically mapped by the runtime when the class is loaded.

  • IMP is a pointer to the start of the function that implements the method.

Method Swizzling exchanges the IMPs mapped to selectors in order to replace methods. The core C runtime API for implementing Method Swizzling is:

OBJC_EXPORT void method_exchangeImplementations(Method m1, Method m2) 
 __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_2_0);

Method Swizzling Example

Suppose we want to print a log statement whenever -viewDidLoad is called on a UIViewController. We create a category on UIViewController and implement its +load method.

#import "UIViewController+swizzling.h"
#import @implementation UIViewController (swizzling)
 
+ (void)load {
    [super load];
    // class_getInstanceMethod() get method structure from method list
    Method fromMethod = class_getInstanceMethod([self class], @selector(viewDidLoad));
    Method toMethod = class_getInstanceMethod([self class], @selector(swizzling_ViewDidLoad));
    
    if (!class_addMethod([self class], @selector(viewDidLoad), method_getImplementation(toMethod), method_getTypeEncoding(toMethod))) {
        method_exchangeImplementations(fromMethod, toMethod);
    }
}
 
//  define method by myself, replacing original method
- (void)swizzling_ViewDidLoad {
    NSLog(@"Log something... for %@",[self class]);
    [self swizzling_ViewDidLoad];
}
@end

3. Aspects

Obviously, if we use this way to hook every method that needs logging, it will be a huge project. Maybe there is a third-party library that is easier to use. Yes: Aspects is an awesome tool for AOP in Objective-C.

Aspects provides two methods for developers:

+ (id<AspectToken>)aspect_hookSelector:(SEL)selector
                      withOptions:(AspectOptions)options
                       usingBlock:(id)block
                            error:(NSError **)error;
- (id<AspectToken>)aspect_hookSelector:(SEL)selector
                      withOptions:(AspectOptions)options
                       usingBlock:(id)block
                            error:(NSError **)error;

One is an instance method and the other a class method; they have the same name and the same parameters. All invocations are thread safe, but Aspects incurs some performance loss due to Objective-C message passing, so I don't suggest using Aspects for frequently invoked methods.

Example

I use Aspects instead of complex Method Swizzling to hook -viewWillAppear: in all UIViewControllers.

#import "ViewControllerLogger.h"
#import @implementation ViewControllerLogger
 
- (instancetype)init {
    if (self = [super init]) {
        [UIViewController aspect_hookSelector:@selector(viewWillAppear:) withOptions:AspectPositionAfter usingBlock:^(id<AspectInfo> aspectInfo) {
            NSLog(@"ViewController will appear:--> %@", NSStringFromClass([aspectInfo.instance class]));
        } error:NULL];
    }
}
@end

Disadvantages

There are some disadvantages to Aspects.

  • Performance loss

    I have mentioned this in the paragraph above.

  • Can't hook class methods

    If you hook a class method, the block won't be called back and you'll get a failure saying Aspects: Block signature doesn't match (null). The reason is that the runtime only obtains the instance method list; if you want to get a class method, you must get it from the metaclass of the class, which you can obtain with object_getClass(newClass).

    Fortunately, someone has written a function for class method hooking:

    static void aspect_prepareClassAndHookSelector(NSObject *self, SEL selector, NSError **error) {
        NSCParameterAssert(selector);
        Class klass = aspect_hookClass(self, error);
        // TODO: Edit by JackYong
        Method targetMethod;
        IMP targetMethodIMP;
        if (class_isMetaClass(klass)) {
            targetMethod = class_getClassMethod(klass, selector);
            targetMethodIMP = method_getImplementation(targetMethod);
        } else {
            targetMethod = class_getInstanceMethod(klass, selector);
            targetMethodIMP = method_getImplementation(targetMethod);
        }