Architecture of Complex Live Stream Controller

Problems & Analysis

In a live stream application, the live stream studio viewController is the most intricate page: it hosts lots of configurable widgets belonging to many kinds of businesses. In my past work experience, I have seen nearly 100 widgets on one page and more than ten different types of studio pages. Hence, if only one viewController manages all the business logic, the code of this page becomes hard to maintain and to develop sustainably, and can even turn into a software management disaster. That was the truth about our project: more than 20,000 lines of code were written in one file before I refactored it. It was terrible.

Indeed, the live stream page is very complex, so it needs to be analysed carefully. The studio businesses can be classified by type:

  • A. a studio for the host of a live video programme
  • B. a single-host live for the audience
  • C. a single-host live with shopping for the audience
  • D. a looping multi-host live for the audience
  • E. a 1v1 host-PK live for the audience
  • F. a financial live for the audience, e.g. a stock live
  • G. a live where one audience member can stream and interact with the host
  • H. playback video for the audience
  • I. an online quiz
  • J. a remote claw machine
  • K. more ways…

Besides, there are numerous functions on a studio page, and whether some of them are shown is decided by server configuration:

  • watching video (live streaming or playback)
  • sending comments, showing the comment list or barrage
  • showing hosts' information, such as id, name, rank, VIP audiences, gold coins, number of audience members online, etc.
  • giving hosts presents and displaying the present animations
  • tipping hosts some money
  • buying commodities that hosts publish
  • sharing the video
  • recording the video
  • swiping the screen to switch between lives
  • browsing the rank list
  • some games or activities
  • showing a lite mode that displays only the video without any widgets
  • rotating horizontal video (most videos are displayed vertically and fill the phone's screen)
  • tapping the screen to send praise hearts

These studios and functions are maintained by different business groups. This diagram reveals our team's organisation structure.

But some basic and abstract functions are common to all these types:

  • getting studio information from the server
  • playing video (live streaming or playback)
  • a keep-alive message system, except for playback
  • detecting swiping gestures if needed
  • the studio display states, such as entering, temporarily leaving and closing
  • rotating the screen orientation if needed
  • a docker view that shows configurable buttons at the bottom, such as commenting, sharing, recording and sending presents
  • a host information view with the host's audience list
  • a close button at the top-right corner

This foundational work should be done in a core business class rather than mixed with other, more specific functions in one class. So, in the next part I'll explain my main ideas for redesigning this studio page.

Core Idea

1. What is a widget?

A widget may be a button, a list, a docker, a dialog box or an animation. In some cases it is a View providing touch and gesture handling; in other cases it is a Layer. Thus, a widget only needs to meet some common protocol so that it can be plugged into a certain position of the studio. A good widget API should be simple and should not care about a developer's own implementation.

So, I defined a StudioWidgetProtocol including several basic and common methods for a widget. In this way, all widgets can be regarded as instances of id<StudioWidgetProtocol>. The group members didn't need to modify much code to adapt to the new architecture; they just implemented several necessary interface methods.

You can consider a widget as a node: it can be plugged into a container view or a parent widget, and it can have its own children. Besides, a level, as a sort of priority, is introduced to this interface; it defines an arrangement in a certain order, such as along the Z axis or from left to right. The frame of a widget shouldn't be considered in this interface; the business group members operate it and implement the other inner logic in the widgets' own classes. The studio is merely responsible for adding widgets on the proper levels and dealing with the relations between widgets and the studio.

However, I know that many colleagues prefer to use a View as a widget in general situations, so I provided a StudioWidget class which implements StudioWidgetProtocol and adds a bool switch, isGesturePenetrate, in case some gestures are intercepted by the widget.
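
The interface might look roughly like the following sketch. The original code isn't shown in this post, so the method names here are my assumptions:

// A minimal sketch of the widget interface; the method names are
// assumptions, not the original project's code.
@protocol StudioWidgetProtocol <NSObject>

- (NSInteger)widgetLevel;                      // priority used to place the widget on a layer
- (UIView *)widgetView;                        // the view (or layer host) to be plugged in
- (void)widgetDidAddToStudio:(id)studio;       // lifecycle callback when plugged in
- (void)widgetWillRemoveFromStudio:(id)studio;

@optional
- (NSArray<id<StudioWidgetProtocol>> *)childWidgets; // a widget is a node with children

@end

// A default implementation for View-based widgets.
@interface StudioWidget : UIView <StudioWidgetProtocol>
// When YES, touches pass through this widget to the views below it.
@property (nonatomic, assign) BOOL isGesturePenetrate;
@end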

2. Distinct layers for Z axis


The studio can be neatly divided into five layers along the Z axis based on the studio's functions. From bottom to top, they are:

  • backgroundView [0, 100): contains the host's default image as the studio's background.
  • displayerView [100, 200): includes the video player control.
  • movableView [200, 300): a vertical swiping gesture on the screen swaps between lite and non-lite mode. Lite mode means nothing is shown above the video player except an orientate-screen button, a lite-mode-switch button and a few other widgets. Most widgets are added on the movableView, such as the comment list, user information and activity icons.
  • liteView [300, 400): contains the lite-mode-switch button and a few other widgets.
  • floatView [400, ∞): dialog boxes, tip information and animations pop over on the floatView.

This design makes sure that different widgets and demands don't wrongly overlap and mix together. Every layer is assigned a level range, so widgets are plugged into the appropriate layer according to their level.
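
To make the ranges concrete, the layer boundaries could be expressed as constants. This is an illustrative sketch rather than the project's original code:

// Illustrative level ranges for the five Z-axis layers.
typedef NS_ENUM(NSInteger, StudioLayerLevel) {
    StudioLayerLevelBackground = 0,    // [0, 100)
    StudioLayerLevelDisplayer  = 100,  // [100, 200)
    StudioLayerLevelMovable    = 200,  // [200, 300)
    StudioLayerLevelLite       = 300,  // [300, 400)
    StudioLayerLevelFloat      = 400   // [400, ∞)
};

// Picking the target layer for a widget by its level (sketch).
static StudioLayerLevel LayerForWidgetLevel(NSInteger level) {
    if (level < 100) return StudioLayerLevelBackground;
    if (level < 200) return StudioLayerLevelDisplayer;
    if (level < 300) return StudioLayerLevelMovable;
    if (level < 400) return StudioLayerLevelLite;
    return StudioLayerLevelFloat;
}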

3. How to process events of widgets and the studio?

There were two candidate solutions for event transmission in front of me: a unique message bus, and peer-to-peer broadcasting.
Let's look at the advantages and disadvantages of both.

  • Message Bus: I would need to define a hash table to buffer a message queue and transfer all events. Every message body must have content, an origin and a destination. The bus is easy to devise and the centralised module is very neat. Meanwhile, the client SDKs all provide convenient bus kits, such as NotificationCenter. Nevertheless, that means every class in the project can register as a message receiver as long as it knows the event name, which is not safe.
  • Broadcast: every widget and studio has its own announcer and can itself become a listener. The announcer broadcasts a message to all listeners that have subscribed to this event. Although it is a little bothersome for developers to implement their own announcers, the listener instances are explicit, and all event relations can be built and managed in a studio controller.

Finally, I preferred the latter. I extended StudioWidgetProtocol to a StudioWidgetProvider protocol that declares an eventAnnouncer, and every studio has an eventAnnouncer as well. An eventAnnouncer holds a hash table to collect all listeners. Once an event is triggered, the listeners' delegate methods are called back to process the message. This is an n-to-n relation. I defined a universal method to respond to events: void announcerOnEvent(string eventName, id sender, List userInfo).
Of course, defining a single message bus instance is also a good and simple way.
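
A rough Objective-C shape of this announcer/listener pattern might look as follows; the type and method names are assumptions based on the description above:

// Sketch of the broadcast pattern described above; names are assumed.
@protocol StudioEventListener <NSObject>
- (void)announcerOnEvent:(NSString *)eventName
                  sender:(id)sender
                userInfo:(NSArray *)userInfo;
@end

@interface StudioEventAnnouncer : NSObject
// eventName -> hash table of listeners (weak references avoid retain cycles)
@property (nonatomic, strong) NSMutableDictionary<NSString *, NSHashTable<id<StudioEventListener>> *> *listeners;

- (void)addListener:(id<StudioEventListener>)listener forEvent:(NSString *)eventName;
- (void)announceEvent:(NSString *)eventName sender:(id)sender userInfo:(NSArray *)userInfo;
@end

@implementation StudioEventAnnouncer

- (void)addListener:(id<StudioEventListener>)listener forEvent:(NSString *)eventName {
    if (!self.listeners) { self.listeners = [NSMutableDictionary dictionary]; }
    NSHashTable *table = self.listeners[eventName];
    if (!table) {
        table = [NSHashTable weakObjectsHashTable];
        self.listeners[eventName] = table;
    }
    [table addObject:listener];
}

- (void)announceEvent:(NSString *)eventName sender:(id)sender userInfo:(NSArray *)userInfo {
    for (id<StudioEventListener> listener in self.listeners[eventName]) {
        [listener announcerOnEvent:eventName sender:sender userInfo:userInfo];
    }
}

@end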

4. A studio controller and good inheritance

A studio controller is not a viewController. It should be retained by a viewController, and it maintains a studio life cycle that encloses the whole life cycle of a viewController. The studio workflow decides this controller's life cycle, and the flow chart below shows the common workflow of most studio types.

Meanwhile, these workflow steps should be divided into unambiguous classes arranged in a good inheritance hierarchy. Every class has a clear duty.

The root class of the studio controller has only one studioWidget as its root view. It is like a canvas on which the entire widget tree is placed. Additionally, it is added into the view controller's view, so the studio can be set as just a part of a page as well as full screen.

The base business class declares the main life cycle of the studio and provides a lot of virtual methods for the business groups to override. These methods are the ruled timings provided for developers; for example, some studio operation methods define the timing of load studio, will leave studio and did leave studio. The group members can also flexibly decide which message system or video player kit to use.

The touchable view implements three gestures — vertical swipe, horizontal swipe and tap — for changing the studio, switching lite mode and sending praise hearts respectively. Their response events, including the swiping distance and event status updates, are exposed in this class.

The core business class achieves some common functions based on the above classes. The Z-axis layers are defined in this class, so widgets can be added into the studio here. Developers can also conceal widgets that don't meet their demands by simply putting the widget's level into a concealing list.

The basic business group can inherit from the core business class to add some common widgets for most studio types. Then the other groups can accomplish their own studio types based on this architecture.
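
Put together, the inheritance chain described above could be sketched like this, with assumed class names:

// The inheritance hierarchy described above; class names are assumed.
@interface StudioRootController : NSObject               // holds the root studioWidget (the canvas)
@property (nonatomic, strong) StudioWidget *rootWidget;
@end

@interface StudioBaseController : StudioRootController   // studio life cycle + virtual methods
- (void)loadStudio;
- (void)willLeaveStudio;
- (void)didLeaveStudio;
@end

@interface StudioTouchController : StudioBaseController  // vertical/horizontal swipe + tap gestures
@end

@interface StudioCoreController : StudioTouchController  // Z-axis layers, common widgets, concealing list
@property (nonatomic, strong) NSArray<NSNumber *> *concealedLevels;
@end

// Business groups subclass StudioCoreController for concrete studio types.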

Main UML of Architecture


How to abstract a universal widget model and how to comprehend the core of a studio are the keys of this design. A good interface decouples the logic between the studio and the widgets. Each group just needs to concentrate on its own business, which shouldn't be impacted by others' logic. Besides, the studio should be a controller that manages the relations among widgets, and between widgets and the studio, in an MVC pattern.

This architecture still has many problems; for example, performance and some development details weren't considered, because our team needed to shift rapidly to the new architecture in a short time. Indeed, there is still much room for optimisation.

iOS Graphics: Workflow of Graphics System

The iOS graphics series will introduce the principles of graphics processing and methods for keeping the iOS screen fluent, including screen refresh essentials, the reasons for display stalls, display performance optimisation, OpenGL, AsyncDisplayKit, SwiftUI, etc.

Pixel

The vision of the real world is continuous, but it is not in the virtual digital realm, as image data is always enumerable. On a screen or another display device, graphics are composed of lots of tiny cells that show RGB colours and are arranged as a matrix. As long as the cells are small enough, people mistake the screen for being continuous. We call these cells pixels.

Let's look at the definition: a pixel is the smallest addressable element in an all-points-addressable display device, so it is the smallest controllable element of a picture represented on the screen. Perhaps these words are boring, so let's see some practical instances. For images of the same size, the more pixels they have, the clearer they are, since the picture can show more details. Like calculus, the image is divided into very tiny parts that approach the infinitely small. We use resolution to measure the number of pixels in an image. We would think a picture is a mosaic if its resolution is low; but if it is very high, we would feel the picture is realistic.

The regular resolutions are 720p (HD), 1080p (Full HD) and 4K.

How do monitors work?

Then, how are the pixels displayed on a screen one by one? This is a long story; let's start from CRT monitors.

There is an electron gun in a CRT monitor. The gun doesn't scan randomly but in a designed fashion, as the figure above shows. It scans from left to right; when the beam reaches the right-hand side of the screen, it undergoes a process known as horizontal flyback. While the beam is flying back, it is also pulled a little way down the screen. The gun keeps repeating this process until the beam reaches the bottom-right corner, at which point a whole frame has been presented on the screen. Then the beam flies back to the initial position for the next scan.

A monitor or other display device uses a hardware clock to send a series of timing signals for synchronisation between the display process and the video controller. The monitor sends a signal called horizontal synchronisation (HSync) when the electron gun enters a new row and prepares to scan it. After a frame has been drawn and the gun has reset, and before the next frame is ready, the monitor sends a signal called vertical synchronisation (VSync). In most cases, the frequency at which VSync is sent is fixed.

Nowadays LED/LCD screens still follow this principle. In this way, all pixels are drawn on the screen and kept on display. If the resolutions of the image and the screen match, the pixels are displayed point to point, which means each image pixel maps to one of the screen's colour pixel lights. If not, several colour pixel lights display one image pixel in proportion.

The refresh rate of most iOS devices such as iPhone and iPad is 59.97Hz, and the iPad Pro can even reach 120Hz.

Why do we need a GPU?

Although the CPU does a lot of work in program processing, the GPU performs better for graphics, because the GPU is good at computing a mass of floating-point operations simultaneously. A GPU has hundreds or even thousands of stream processors, which is an absolutely different architectural design from CPUs, whose number of cores is far smaller, mostly only 6 or 8. Every stream processor is an independent calculator that just concentrates on graphics computing. Graphics processing needs exactly this feature, as it can be regarded as a massive and complex arithmetic test.

So, the CPU and the GPU need to cooperate during the rendering of a frame. The CPU prepares and initialises the frame data, then writes it into a shared buffer for the GPU to read and shade. About synchronising the CPU's and the GPU's work in iOS, you can reference Apple's documentation: synchronizing_cpu_and_gpu_work
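
As an illustration of the pattern in that documentation, a common way to stop the CPU from overwriting a buffer the GPU is still reading is a semaphore over a small pool of per-frame buffers. This is a minimal Metal sketch, assuming the app already owns a command queue; it is not code from the article:

#import <Metal/Metal.h>

static const NSUInteger kMaxFramesInFlight = 3;

// Called once per frame. 'queue' is the app's MTLCommandQueue and
// 'frameSemaphore' was created with dispatch_semaphore_create(kMaxFramesInFlight).
void RenderFrame(id<MTLCommandQueue> queue, dispatch_semaphore_t frameSemaphore) {
    // Block until one of the in-flight buffers is free for the CPU to write.
    dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);

    // ... update this frame's shared buffer and encode the draw commands ...

    id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
        // The GPU has consumed this frame's buffer; hand it back to the CPU.
        dispatch_semaphore_signal(frameSemaphore);
    }];
    [commandBuffer commit];
}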

The graphics system workflow


This figure illustrates the workflow of image processing.

The CPU is in charge of initialising the image model instances and updating the data in a shared buffer. After that, it encodes commands that reference the buffer instance and commits them. At this point, the GPU can read data from the shared buffer and execute the commands. These jobs are added to a CPU queue and a GPU queue respectively and conducted in order, which guarantees that frames can be rendered continuously. This is a producer-consumer pattern: the CPU produces data and the GPU consumes it.

Generally, the GPU puts the result into a frame buffer after rendering a frame. A video controller reads data from this buffer line by line according to VSync, and then the picture is shown on the monitor.

However, this is the simplest model, with only one frame buffer. The video controller has to wait until the GPU finishes writing the frame buffer. This is a big efficiency problem that can lead to stalls, since the video controller may still be waiting for the frame buffer when the monitor has finished scanning. Thus, graphics systems usually use a double frame buffer to solve this problem.

Double frame buffer & its problem


The double frame buffer structure adopts a swap mode to optimise efficiency. The GPU pre-renders a frame and puts it into buffer1, which the video controller reads first. Once the next frame has been rendered and written into buffer2, the video controller points to buffer2 and reads it; meanwhile, buffer1 is erased and rewritten by the GPU for a new frame. The two buffers keep swapping their writing and reading states. This way, the video controller doesn't need to wait.

Although it improves the system, it brings a new problem. If the GPU has already submitted the next frame and the buffers have swapped while the video controller hasn't finished reading (that is, the monitor has shown only part of the previous frame), the video controller will draw the rest of the picture from the new frame. This causes picture tearing.

Graphics card firms usually provide a function called V-Sync (AMD calls a similar technology FreeSync). You must have seen it in your game's or system's graphics configuration if you are a gamer. This is an effective method to avoid tearing: the GPU waits for a V-Sync signal from the monitor before rendering a new frame and updating the frame buffer. Nevertheless, it wastes much computing resource, and the waiting may delay frames.

Reason of display stalls

In iOS, the graphics service notifies the application through CADisplayLink after receiving the VSync signal. At this time, tasks such as initialising image data and computing layout are added to the application's main thread, and the GPU executes the next tasks, such as converting and rendering, in its own thread. The result, a new frame, is eventually written into the frame buffer.

The signal interval depends on the refresh rate. After each interval, the video controller reads the current data from the frame buffer. If the frame is the newest and complete, it is shown; if the CPU or the GPU hasn't submitted its data yet, the processors carry on with their jobs, no new job is added to the main thread until the next valid opportunity, and the screen keeps showing the last frame. This is the reason for display stalls.

In short, if either the CPU or the GPU spends more time than the interval between two VSyncs, the display will stall. Thus, an application has to reduce its consumption of both CPU and GPU resources.
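
Since CADisplayLink fires once per VSync, it can also be used to watch for dropped frames. Here is a small FPS monitor sketch of my own, not part of the original post:

#import <QuartzCore/QuartzCore.h>

// A tiny FPS monitor: counts CADisplayLink ticks per second on the main thread.
@interface FPSMonitor : NSObject
@property (nonatomic, strong) CADisplayLink *link;
@property (nonatomic, assign) NSUInteger frameCount;
@property (nonatomic, assign) CFTimeInterval lastTimestamp;
@end

@implementation FPSMonitor

- (void)start {
    // Note: the display link retains its target; call [self.link invalidate] when done.
    self.link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [self.link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    if (self.lastTimestamp == 0) { self.lastTimestamp = link.timestamp; return; }
    self.frameCount++;
    CFTimeInterval delta = link.timestamp - self.lastTimestamp;
    if (delta >= 1.0) { // report once per second
        NSLog(@"FPS: %.1f", self.frameCount / delta);
        self.frameCount = 0;
        self.lastTimestamp = link.timestamp;
    }
}

@end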

A Brief History of Human-Computer Interaction

Computers are like magic boxes that reveal various extraordinary features and amazingly change our lives. Nowadays we use laptops, mobile phones and other fresh, ever smarter devices every day, and we find they are essential partners and even friends. Perhaps we are used to using them, but have you paid attention to how we operate them and how the ways of interaction have changed through history? In case you have no clear idea about that, this article will introduce a brief history of Human-Computer Interaction (HCI).

As a type of interactive tool, a computer can be operated by users to input information, and it can reflect and feed back to people. Interacting with a computer is like human conversation: we ask questions and usually get a certain response. If I ask someone what the weather will be like tomorrow, he may answer sunny, or I dunno. Interaction is also a kind of conversation whose methods are not limited to speech between humans and computers. It lets the computer understand what you want and what it should do. So the history of interaction is also a history of the methods of conversation between us and computers. I'll talk about how we have operated computers from their birth to future possibilities. Meanwhile, I'll also describe the changes in how computers output information, show results and respond to us over these decades. Finally, I'll list some changes in the ways humans communicate with each other influenced by HCI and networks.

Input

The era of mechanical operation

At the beginning of the computer age, the computer was regarded as a big, complex calculating machine. Therefore, people operated computers like common machines. While some of these mechanical operations are obsolete, a lot of them remain to the current day.

  • Real Button/Switch
    I believe that we can hardly find a digital device without a button. Although some buttons on monitors have been replaced by touch-sensitive ones, buttons still exist. Whether it is a mechanical button or a touch-sensitive button, we consider both of them real buttons relative to the virtual buttons on a GUI.
    Every digital device needs at least one button to turn it on or boot it. I can't tell you the specific time when the first button or switch on a computer was produced, but it must be one of the oldest methods of HCI.
    A button offers three types of basic operation: press, long press and double press. There are also some buttons, like switches, that have two states: pushed or released. We can see all these ways of operation on an iPhone; for example, the power button in the default settings: press locks/unlocks the screen, long press summons Siri, and double press shows the wallet.

    Of course, combinations of buttons bring more operations; for example, the power button plus the volume-up button takes a screenshot on an iPhone.
    This operation method is very user friendly, since people can easily find the location of a button by groping for it, and they get physical feedback when pushing it, which makes users feel safe and trust it. Even when the device has problems, a real button is more reliable, for its principle is simple and bugs rarely occur. Thus, we often long press the power button to restart when our smart device doesn't work.

  • Knob
    (Image: Wikipedia, https://en.wikipedia.org/wiki/File:TDCfullview.jpg)
    The Torpedo Data Computer was an early electromechanical analog computer used during World War II. The US Navy controlled it by the knobs, like a radio's, on its surface and side. By twisting them, you could change the data you wanted to input. I'm not sure whether the inspiration came from the knob on the side of a watch that is used for adjusting the time. Obviously, this classical design is still applied to consumer electronics in the digital era, such as the Apple Watch's digital crown.
    (Image: Apple)
    It is simple and fluent when we are accurately adjusting numbers and data or selecting an item from a list.

  • Punch Card
    I don't view data storage devices, such as tapes, floppies and flash disks, as kinds of HCI tools, since those devices aren't directly operated to control or command computers; rather, computers read or write them via other tools operated by humans. However, the punch card is different: in a sense, it is a tool which can directly show the input/output data of a computer.
    Although the earliest punched card was invented in the 1700s, it became ubiquitous in the computer field from the 1970s. It could be read by computer punched card readers and directly recognised by the computer. It is no longer applied in the computer field, but many voting systems still use it.
    (Image: Wikipedia, https://en.wikipedia.org/wiki/File:IBM1442.corestore.jpg)

  • Keyboard
    A computer keyboard is a device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Before it was used for computers, you could also find this style of device on pianos, typewriters and so on. The earliest keyboard devices were used in the stock market in the 1870s. In the 1940s, the development of the earliest computers incorporated electric typewriter keyboards: a keypunch was used to write punched cards for ENIAC. The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing, until the introduction of the mouse as a consumer device in 1984. Fortunately, keyboards were widely accepted by average people, unlike punch cards, which were only for engineers.
    Typewriting plays a vital role in HCI to this day. It is still the best and most efficient way of editing words. If there were no keyboard, I can hardly imagine that I would like writing code, or even words. Keyboard shortcuts are so far the best and fastest way to issue commands in a lot of situations, even though people have used the mouse for many years.
    For text, pen and paper (or parchment, etc.) were the only interactive objects in the past. Humans wrote copies of the Bible, letters, poems and books by hand for a long time before printing was invented. It was quite hard work, for writers tired easily and had to be careful about scribal norms. Handwritten text, consisting of words or characters, is like a work of art: it not only conveys the information of its content but also shows its writer's mental expression. This kind of expression builds on the force, the fonts and the control of the pen's strokes.

    However, the keyboard changed all of this: text is no longer an object of pen and paper; it can focus purely on the information the text should express. Because of the keyboard, the interaction between human and computer, or human and human, could become pure communication via virtual text. It liberates us from heavy manual labour and makes writing convenient, flexible and fast, so more people love to create. We can swiftly input a mass of words and easily edit or correct them. I can't forget how excited I was when I learnt to use Ctrl + C and Ctrl + V. In this age, I can write articles, code programs, create poems and post blogs anywhere with a keyboard. Common people have started to have the free will to express themselves in words, breaking through the constraints of the press, religious leaders, cultural authorities and so-called sages.
    Besides, the keyboard produced more possibilities for controlling computers, especially in games. So many keys on a keyboard give games more operational options. Players practise complex keyboard operations in some RTS games; professional players are able to hit over 300 actions per minute (APM) with a keyboard in StarCraft II. To these game masters, the keyboard is as important as an ancient warrior's sword, and it is a necessary bridge between them and computer games. Undoubtedly, the relation between humans and computers has been becoming closer since the birth of the keyboard.
    Keyboards have different shapes, connection methods and designs to meet various users' needs. Based on the switch type, mechanical, membrane, conductive rubber and capacitive keyboards provide totally different pressing feels. Although QWERTY is the most popular layout, users of other languages can enjoy more suitable layouts. Moreover, for Windows or for Mac; cabled or wireless; embedded or independent; keypadless or full size: all of them offer users a flexible diversity of choice.

    Despite being an outstanding interactive device, the keyboard brought some difficulty for rookies. Because the keyboard carries over many of the typewriter's features, beginners have to pay a learning cost. Recall the time when you began to learn to type on a keyboard: you might have been confused by the complex key layout and the weird names of the function keys. You might not have known why the letters are not in alphabetical order, and you simply had to remember the layout and practise typing again and again. Perhaps one day you were told that the strange layout was meant to avoid a mechanical problem in the typewriter era, but by then the reason no longer mattered, since you had mastered typing. People consider a computer with a keyboard a machine, even though the keyboard reformed the methods of operating computers, because we need to learn and adapt to this particular layout and remember all kinds of shortcut combinations. So, if you want to make friends with a computer through a keyboard, a typewriting lesson is part of your journey.

//TODO

  • Mouse
  • Remote Controller
  • Joy Stick
  • Touchable Board

The age of smart operation

The time of brain-computer interface

Output

Electromechanical display

  • Nixie tube
  • Panel
  • LED Light

Human-Human Interaction in Networks

Sidecar display issue on Mac OS Catalina when True Tone or Night Shift are on

Sidecar is a fantastic feature of the newest macOS, Catalina. It helps us work in an external display space with an iPad.
(Image: Sidecar)
I upgraded my MacBook Pro to macOS 10.15 and my iPad Pro to iPadOS 13.1, but there was some oddness in the display colours: window shadows were beige, not grey.

(Image: display issue)

(Image: normal display)

I believe many users have encountered the same problem. I searched it on Google, and some people think this is an Intel graphics card bug.

This issue happens when these three conditions are satisfied simultaneously:

  • 1. using Sidecar
  • 2. True Tone or Night Shift is on
  • 3. the Intel graphics card is running

So, there is a temporary solution:

  • Turn off True Tone and Night Shift (for MacBooks with only an Intel graphics card).
    (Image: turn off True Tone)

  • If your MacBook has a discrete graphics card or an external graphics card, you can go to System Preferences -> Energy Saver and uncheck Automatic graphics switching.

However, this makes your laptop warmer, even hot, and wastes more electricity.

iOS Memory Management


Memory Management in Objective-C

Memory management is the programming discipline of managing the life cycles of objects and freeing them when they are no longer needed. Managing object memory is a matter of performance: if an application doesn't free unneeded objects, its memory footprint grows and performance suffers. However, garbage collection is not available in iOS; iOS manages memory by reference counting. Let's learn about it.

Reference Count

If someone owns an object, the object is useful, so the system shouldn't release it. When no one needs to own it any more, it is deallocated. Based on this norm, iOS manages memory by reference count. Every time an object gains an owner, its reference count is incremented by 1, and vice versa. If the reference count equals 0, the object's dealloc method is invoked. We can use these methods to change the reference count:

| object operation | method | result of operation |
|---|---|---|
| create and own an object | alloc, new, copy, mutableCopy | creates the object and sets its reference count to 1 |
| own an object | retain | reference count + 1 |
| release an object | release | reference count - 1 |
| drop an object | dealloc | invoked when the reference count equals 0 |

We can comprehend an object's life cycle through these methods:

After the creation and initialization phase, an object remains in memory as long as its retain count is greater than zero. Other objects in the program may express an ownership interest in an object by sending it retain or by copying it, and then later relinquish that ownership interest by sending release to the object. While the object is viable, a program may begin the archiving process, in which the object encodes its state in the archive byte stream. When the object receives its final release message, its retain count drops to zero. Consequently, the object’s dealloc method is called, which frees any objects or other memory it has allocated, and the object is destroyed.

In the past, developers needed to manually manage the reference count; we call that manual retain-release (MRR). Now Apple recommends automatic reference counting (ARC), which means you don't need to care about the methods in the table above when you write code. ARC automatically inserts the memory management methods when the program compiles.
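
For instance, under MRR the counting in the table is explicit. A tiny illustration (compile without ARC):

// MRR example; compile with -fno-objc-arc. Under ARC the compiler
// inserts the equivalent calls automatically.
NSObject *obj = [[NSObject alloc] init]; // reference count = 1, we own it
[obj retain];                            // reference count = 2
[obj release];                           // reference count = 1
[obj release];                           // reference count = 0 -> dealloc is invoked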

Runloop & Autorelease Pool

A runloop is a loop mechanism for managing a thread. The Application Kit creates at least one NSRunLoop instance per application. Apps run in this loop after launching, as shown in the diagram below. When a touch event happens, the Cocoa Touch framework detects the event, creates an event object, then allocates and initialises an autorelease pool, which is basically an NSAutoreleasePool object. (If you use ARC, you cannot use autorelease pools directly; instead, you should use an @autoreleasepool block.) Cocoa Touch then invokes your application's event handler, making the event object available.

The handler may put objects into the autorelease pool, or use objects that were put into the autorelease pool by other objects.

In MRC, we can use the autorelease method to put an object into the autorelease pool. The autorelease method is different from the release method mentioned in the previous chapter: release takes effect immediately, decrementing retainCount by 1 and calling dealloc if it becomes zero, while autorelease defers the release until the pool is drained at the end of the current runloop iteration.
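
A typical use of an @autoreleasepool block is draining temporary objects inside a tight loop, for example:

// Without the inner pool, all temporary strings would accumulate until the
// outermost pool drains; with it, they are freed at the end of each iteration.
for (NSUInteger i = 0; i < 100000; i++) {
    @autoreleasepool {
        NSString *tmp = [NSString stringWithFormat:@"line %lu", (unsigned long)i];
        // ... use tmp ...
    }
}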

Apple documents about Run Loops.

Apple documents about NSAutoreleasePool.

Retain Cycle

  • What is a retain cycle?

    Have a look at this code:

    #import <Foundation/Foundation.h>
    
    @class RetainCycleClassB;
    
    @interface RetainCycleClassA : NSObject
    
    @property (nonatomic, strong) RetainCycleClassB *objectB;
    
    @end
    
    --------------------------------------------------------------
    
    #import "RetainCycleClassA.h"
    #import "RetainCycleClassB.h"
    
    @implementation RetainCycleClassA
    
    - (instancetype)init
    {
        if (self = [super init]) {
            self.objectB = [[RetainCycleClassB alloc] initWithClazzA:self];
        }
        return self;
    }
    
    @end
    
    --------------------------------------------------------------
    
    #import "RetainCycleClassA.h"
    
    @interface RetainCycleClassB : NSObject
    
    @property (nonatomic, strong) RetainCycleClassA *objectA;
    
    - (instancetype)initWithClazzA:(RetainCycleClassA*)objectA;
    
    @end
    
    ---------------------------------------------------------------
    
    #import "RetainCycleClassB.h"
    
    @implementation RetainCycleClassB
    
    - (instancetype)initWithClazzA:(RetainCycleClassA *)objectA
    {
        if (self = [super init]) {
            self.objectA = objectA;
        }
        return self;
    }
    
    @end
    
    

    When you run this code, you will find that objectA and objectB are never released. These two instances form a retain cycle.

    The retain cycle is a widespread memory management problem. If two objects A and B own each other, neither can be released when their life cycles finish, which leads to memory leaks.

    Just like the first graph in the image below: ObjectA's strong pointer points to ObjectB, and ObjectB's strong pointer points to ObjectA, too. In ARC, a strong pointer means ownership, and the reference count + 1. This brings a problem: if you want ObjectA's reference count to reach 0, ObjectB has to be released first, and if you want ObjectB released, ObjectA also has to be released first. This makes an unsolvable cycle.

  • How to avoid a retain cycle?

    Thereby Apple provides the weak pointer in ARC. A weak pointer has two features:

    1. It doesn't increase the reference count.
    2. When the object's life cycle ends, the pointer is set to nil.

    Look at the second graph in the image above, where a weak pointer is used instead of a strong pointer. Even though ObjectB still has a pointer to ObjectA, ObjectB doesn't own ObjectA, and the reference count doesn't increase. Like this, their memory will be released normally.
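
    Concretely, breaking the cycle in the example above only needs one side of the relation to become weak, e.g. in RetainCycleClassB.h:

    // RetainCycleClassB.h: hold the back-reference weakly to break the cycle.
    @property (nonatomic, weak) RetainCycleClassA *objectA;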

  • Three circumstances of retain cycle

    • delegate

    If the delegate property were declared strong, it would lead to a retain cycle; therefore it is declared weak:

    @property (nonatomic, weak) id <RetainCycleDelegate> delegate;
    
    MyViewController *viewController = [[MyViewController alloc] init];
    viewController.delegate = self; //suppose self is id<RetainCycleDelegate>
    [self.navigationController pushViewController:viewController animated:YES];
    
    
    • block
    typedef void (^RetainCycleBlock)(void);
    @property (nonatomic, copy) RetainCycleBlock aBlock;
    if (self.aBlock) {
        self.aBlock();
    }
    
    

    When a block is copied, it strongly retains all objects referenced inside it. Here, the class takes the block as its own property, and self is referenced inside the block. That makes a retain cycle.

    self.testObject.aBlock = ^{
        [self doSomething];
    };
    

    We can use a weak reference to break up this cycle:

    __weak typeof(self) weakSelf = self;
    self.testObject.aBlock = ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        [strongSelf doSomething];
    };
    
    
    • NSTimer

    When we set self as the target of an NSTimer's callback, the timer retains self, which makes a retain cycle. Note that a repeating timer keeps retaining its target, so dealloc will never be called while the timer is running. We need to invalidate the timer and set it to nil when its task completes or the page goes away, for example:

    - (void)viewDidDisappear:(BOOL)animated {
        [super viewDidDisappear:animated];
        [self.myTimer invalidate];
        self.myTimer = nil;
    }
    

To learn more about memory management in iOS, see Apple's documentation.

The Solution of Live Streaming Quiz Mobile Client

Live Streaming Quiz (LSQ) has been a very prevalent interactive game since Q4 2017. Many live streaming video companies have published their own quiz features in the App Store. Today I'll introduce my LSQ project.

1. The Way of Playing Quiz

Live streaming quiz is like joining "Slumdog Millionaire" on your smartphone. There are 10 questions asked by the program anchor during the game. The topics cover culture, language, history, music, movie stars, etc. The player has to select one answer from 4 options within 10 seconds. If you are lucky and answer all the questions correctly, you divide one million dollars equally with all the other winners; if only the first six questions are answered correctly, you still get some in-app purchase coupons. When you lose the game, you become a normal live video audience member, but if you have a Resurrect Card, you can return to the game.

(Image: HQ)

Isn't it an exciting game? Let me talk about how to implement an LSQ mobile client framework.

2. Technical Workflow

The technical essence of live streaming quiz is the combination of live video streaming, instant messaging and big data.

  • Live Video Streaming

    I have talked about live streaming video technology in a past post. Besides, in LSQ we used an interesting video streaming technology called Supplemental Enhancement Information (SEI). SEI is used for synchronisation between the video content and socket messages. It lets the stream transmit extra information (e.g. a time stamp or a JSON structure) besides the video. You can learn more in the Google patent image about SEI.

  • Instant Message

    LSQ depends on a keep-alive socket to implement many features, such as instantly receiving question content, question results, users' comments and the number of online users. Because there is a time offset between the moment the client receives a question sent by the technical director and the moment the program anchor speaks it in the video, an SEI time stamp is added to the video stream. The delay time is approximately 4s ~ 15s. When the client receives the question content, it parses the time stamp and saves the question in memory. When the time stamp from the streaming SEI becomes greater than the time stamp from the instant message, the client displays the question panel.
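
    The gating logic might look like this sketch; the class and method names are my own hypothetical illustration, not the project's code:

    // Hypothetical sketch of the SEI/IM synchronisation described above.
    @interface QuizQuestion : NSObject
    @property (nonatomic, assign) int64_t timestamp; // when the anchor reads it (from IM)
    @property (nonatomic, copy) NSString *content;
    @end

    @interface QuizSyncManager : NSObject
    @property (nonatomic, strong) NSMutableArray<QuizQuestion *> *pendingQuestions;
    @end

    @implementation QuizSyncManager

    // Called whenever the player parses an SEI payload from the stream.
    - (void)didReceiveSEITimestamp:(int64_t)seiTimestamp {
        QuizQuestion *next = self.pendingQuestions.firstObject;
        if (next && seiTimestamp >= next.timestamp) {
            [self.pendingQuestions removeObjectAtIndex:0];
            [self showQuestionPanel:next]; // display in sync with the anchor's speech
        }
    }

    - (void)showQuestionPanel:(QuizQuestion *)question { /* present the UI */ }

    @end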

  • Big Data

    When users make their choices, the server collects the different answers asynchronously. The big data system has to compute, with high performance, the answer distribution, the correctness of each question and the usage rate of Resurrect Cards. It also needs to produce a report form in a data dashboard for the technical director and other supervisors.

3. Mobile Client Life Cycle

The left arrows mean the client posting data or displaying a panel; the right arrows mean the client receiving data.

This is the life cycle of one round of the game in the client. The client program needs to respond instantly based on the different information from SEI and the socket.

4. Mobile Client Structure Graph

The core of the LSQ client structure is data management. The data manager collects various kinds of data from IM, SEI and HTTP requests. The data manager is the only data communication party apart from the video, which decouples the UI from the data.

YXQueue

An object-oriented, easy-to-use job queue for iOS

YXQueue is an encapsulation of NSOperation. The thread manager and the invoker are separated by YXQueue. Using it, developers don't have to focus too much on thread management; they just pay attention to how to create a job and implement the delegate.

The github address is https://github.com/jacklandrin/YXQueue

Architecture

  • YXQueueDispatcher

    It's designed as the dispatcher of all YXQueues. It maintains the NSOperationQueue for all jobs.

  • YXQueueJob

    You can understand a job as a model for operations. The configuration of operations is set here.

  • YXQueueJobManager

    It manages the operations produced by a job. Because of operation dependencies, YXQueueJobManager may need to manage multiple operations for one job.

  • YXQueueOperation

    It inherits from NSOperation. You can implement your operation content in - (void)executeTaskWithResultBlock:(void (^)(void))block.

  • <YXQueueJobDelegate>

    It provides the job's callbacks for finishing, starting, cancelling and progress changing.

Usage

1. Inheriting YXQueueEngine

YXQueue provides YXDownloadQueue for multithreaded downloading of big files. It can be seen as a demo of the thread manager.

First, implement a subclass of YXQueueJob, adding the necessary model properties and configuring the job's type and the appropriate YXQueueJobManager and YXQueueOperation classes. For example, YXQueueDownloadJob:

@interface YXQueueDownloadJob : YXQueueJob

@property (nonatomic, strong) NSString *downloadUrl;
@property (nonatomic, strong) NSString *targePath;

@end

config:

- (NSString *)jobTypeString
{
    return @"download";
}

+ (Class)managerClass
{
    return [YXQueueDownloadJobManager class];
}

+ (Class)operationClass
{
    return [YXQueueDownloadOperation class];
}

Subsequently, create YXQueueDownloadOperation inheriting from YXQueueOperation. Configure the operationModel, the resourceIdentifier (the thread's name) and the appropriate job class. YXQueueOperationModel can rule the max concurrent thread count and the operation type. Implement the method - (void)executeTaskWithResultBlock:(void (^)(void))block.

- (instancetype)initWithJob:(YXQueueJob *)queueJob
{
    NSAssert([queueJob isKindOfClass:[YXQueueDownloadJob class]], @"queueJob must be YXQueueDownloadJob");
    if (self = [super initWithJob:queueJob]) {
        self.resourceIdentifier = @"com.queue.download";
        self.queuePriority = NSOperationQueuePriorityLow;
        _session = [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration defaultSessionConfiguration] delegate:self delegateQueue:[NSOperationQueue mainQueue]];
    }
    return self;
}

- (YXQueueDownloadJob *)job
{
    return (YXQueueDownloadJob*)_job;
}

- (YXQueueOperationModel *)operationModel
{
    if (!_model) {
        _model = [[YXQueueOperationModel alloc] init];
        _model.operationTypeString = @"downloadOperation";
        _model.maxConcurrentOperationCount = 5;
    }
    return _model;
}

- (void)executeTaskWithResultBlock:(void (^)(void))block
{
    __weak typeof(self) weakSelf = self;
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:self.job.downloadUrl] cachePolicy:NSURLRequestReloadIgnoringLocalCacheData timeoutInterval:3600];
    NSURLSessionDownloadTask *downloadTask = [self downloadTaskWithRequest:request progress:^(NSProgress *downloadProgress) {
        weakSelf.progress = (float)downloadProgress.completedUnitCount / (float)downloadProgress.totalUnitCount;
        [weakSelf notifiProgressDidChange];
    } destination:^NSURL *(NSURL *targetPath, NSURLResponse *response) {
        return [NSURL fileURLWithPath:weakSelf.job.targePath];
    } success:^(NSURLResponse *response, NSURL *fileURL) {
        weakSelf.operationReslut = response; // use weakSelf, consistent with the rest of the block
        if (block) {
            block();
        }
    } failure:^(NSURLResponse *response, NSError *error) {
        weakSelf.operationError = error;
        if (block) {
            block();
        }
    }];
    downloadTask.priority = NSURLSessionTaskPriorityLow;
}

Finally, you can inherit a subclass YXQueueDownloadJobManager from YXQueueJobManager, though there isn't any difference from the superclass.

2. Creating a Job

You can create a job like this:

YXQueueDownloadJob *job = [[YXQueueDownloadJob alloc] init];
job.downloadUrl = @"https://www.exmaple.mp4";
job.targePath = targetUrl;
[job addDelegate:self];

//command the job to start.
[job doJob];

//command a non-current job to cancel.
[job doCancel];

and you can implement these delegate methods to receive changes of the job's status:

/**
 job finished
 */
- (void)queueJob:(YXQueueJob*)job operationFinished:(YXQueueOperation*)operation;
/**
 job started
 */
- (void)queueJob:(YXQueueJob*)job operationDidStart:(YXQueueOperation*)operation;
/**
 job failed
 */
- (void)queueJob:(YXQueueJob*)job operationFailed:(YXQueueOperation*)operation withError:(NSError*)error;
/**
 job was cancelled
 */
- (void)queueJob:(YXQueueJob*)job operationDidCanceled:(YXQueueOperation*)operation;
/**
 the progress updated
 */
- (void)queueJob:(YXQueueJob*)job operationDidUpdateProgress:(float)progress;

Demo

YXQueueDemo is an mp4 downloader provided as YXQueue's demo. You can modify the mp4 URL to download a different video; the default max concurrent download count is 5, which is set in YXQueueDownloadOperation's operationModel method.

YXStackView

UIToolbar is a useful control in UIKit, but after iOS 11 we need to add some compatibility code to keep the UI layout. Then I found UIStackView can be used for layout; however, it doesn't have features like UIBarButtonSystemItemFlexibleSpace. So, in order to create a container view supporting flexible space, I wrote YXStackView.

The github address is https://github.com/jacklandrin/YXStackView.

typedef enum {
    YXStackViewItemStyleCustom,
    YXStackViewItemStyleFlexibleSpace,
    YXStackViewItemStyleFlexibleItem
}YXStackViewItemStyle;

@interface YXStackViewItem : NSObject

@property (nonatomic, strong) UIView *customView;
@property (nonatomic, assign) NSUInteger index;
@property (nonatomic, assign) YXStackViewItemStyle style;

@end

The class YXStackViewItem is the model of an item view in the container, like UIBarButtonItem in UIToolbar. If the YXStackViewItemStyle is YXStackViewItemStyleFlexibleSpace, the customView will be nil. You can instantiate items based on your requirements and set them into the YXStackView.

typedef enum {
    YXStackViewAxisHorizontal,
    YXStackViewAxisVertical
} YXStackViewAxis;

@interface YXStackView : UIView

@property (nonatomic, assign) YXStackViewAxis axis;
@property (nonatomic, assign) CGFloat spacing;
@property (nonatomic, assign) CGFloat columnSpacing; // available when isAutoFitEdge is YES
@property (nonatomic, assign) BOOL reverse; // available when isAutoFitEdge is NO
@property (nonatomic, strong) NSArray<YXStackViewItem*>* items;
@property (nonatomic, assign) BOOL isAutoFitEdge; // whether items auto-resize with the view; if they run over the view's edge, the stack view wraps to multiple lines. If YES, YXStackViewItem doesn't support YXStackViewItemStyleFlexibleSpace or YXStackViewItemStyleFlexibleItem.

@end

YXStackView supports two layout orientations, horizontal and vertical, and optional reverse sorting of the items. isAutoFitEdge controls multiline display; if it's YES, columnSpacing becomes available. When layoutSubviews is invoked, the items' layout is recalculated.

If you just use it to replace UIToolbar, you can write this:

YXStackView *toolbar = [[YXStackView alloc] initWithFrame:CGRectMake(0, 200, self.view.frame.size.width, 40)];
toolbar.spacing = 10.0;
toolbar.isAutoFitEdge = NO;


[self.view addSubview:toolbar];

UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 30)];
view1.backgroundColor = [UIColor redColor];

YXStackViewItem *item1 = [[YXStackViewItem alloc] init];
item1.style = YXStackViewItemStyleCustom;
item1.customView = view1;


YXStackViewItem *space = [[YXStackViewItem alloc] init];
space.style = YXStackViewItemStyleFlexibleSpace;


UIView *view3 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 40)];
view3.backgroundColor = [UIColor purpleColor];

YXStackViewItem *item3 = [[YXStackViewItem alloc] init];
item3.style = YXStackViewItemStyleCustom;
item3.customView = view3;


[toolbar setItems:@[item1,space,item3]];

and you will get this layout:

Or, for the UIStackView-like multiline mode:

YXStackView *toolbar = [[YXStackView alloc] initWithFrame:CGRectMake(0, 200, self.view.frame.size.width, 40)];
toolbar.spacing = 10.0;
toolbar.isAutoFitEdge = YES;

[self.view addSubview:toolbar];

UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 30)];
view1.backgroundColor = [UIColor redColor];

YXStackViewItem *item1 = [[YXStackViewItem alloc] init];
item1.style = YXStackViewItemStyleCustom;
item1.customView = view1;

UIView *view2 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 250, 30)];
view2.backgroundColor = [UIColor blueColor];

YXStackViewItem *item2 = [[YXStackViewItem alloc] init];
item2.style = YXStackViewItemStyleCustom;
item2.customView = view2;

UIView *view3 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 40)];
view3.backgroundColor = [UIColor purpleColor];

YXStackViewItem *item3 = [[YXStackViewItem alloc] init];
item3.style = YXStackViewItemStyleCustom;
item3.customView = view3;

UIView *view4 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 30, 30)];
view4.backgroundColor = [UIColor greenColor];

YXStackViewItem *item4 = [[YXStackViewItem alloc] init];
item4.style = YXStackViewItemStyleCustom;
item4.customView = view4;

UIView *view5 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 20, 30)];
view5.backgroundColor = [UIColor brownColor];

YXStackViewItem *item5 = [[YXStackViewItem alloc] init];
item5.style = YXStackViewItemStyleCustom;
item5.customView = view5;

toolbar.columnSpacing = 10;
[toolbar setItems:@[item1, item2, item3, item4, item5]];

and you will get:

An Amazing Class: NSProxy

NSProxy is a root class in Objective-C. Yes, NSObject is not the only root class. From the definition of NSProxy, we can think of NSProxy as a simplified NSObject: it just implements the NSObject protocol. As an abstract class, its methods need to be implemented by subclasses. One of them, forwardInvocation:, is the key method of this class; it implements part of the message forwarding feature.

Typically, a proxy is used to implement the delegate pattern. For example, let's make an animal proxy:

//AnimalProxy.h

@interface AnimalProxy : NSProxy

- (void)proxyWithAnimal:(NSObject*)anObject;

@end

@interface Bird : NSObject

- (void)fly;

@end

@interface Tiger : NSObject

- (void)eat:(NSString*)food;

@end

//AnimalProxy.m

@interface AnimalProxy()

@property (nonatomic, strong) NSObject *proxyObject;

@end

@implementation AnimalProxy

- (void)proxyWithAnimal:(NSObject *)anObject
{
    self.proxyObject = anObject;
}

- (void)forwardInvocation:(NSInvocation *)invocation
{
    if (self.proxyObject) {

        [invocation setTarget:self.proxyObject];

        if ([self.proxyObject isKindOfClass:[NSClassFromString(@"Tiger") class]]) {
            NSString *str = @"deer";
            [invocation setArgument:&str atIndex:2];
        }

        [invocation invoke];
    }
}

- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel
{
    NSMethodSignature *signature = nil;
    if ([self.proxyObject methodSignatureForSelector:sel]) {
        signature = [self.proxyObject methodSignatureForSelector:sel];
    } else {
        signature = [super methodSignatureForSelector:sel];
    }

    return signature;
}

@end

@implementation Bird

- (void)fly
{
    NSLog(@"Bird flies");
}

@end

@implementation Tiger

- (void)eat:(NSString *)food
{
    NSLog(@"Tiger eats %@",food);
}

@end

When I invoked them, I could get output like this:

//NSProxy doesn't have initialization method
AnimalProxy *proxy = [AnimalProxy alloc];     
Tiger *tiger = [[Tiger alloc] init]; 
Bird *bird = [[Bird alloc] init];

[proxy proxyWithAnimal:tiger];  
[proxy performSelector:@selector(eat:) withObject:@"zebra"];

[proxy proxyWithAnimal:bird];   
[proxy performSelector:@selector(fly)];

output:
2018-05-21 21:30:26.866892+0800 MethodDemo[3860:852618] Tiger eats deer
2018-05-21 21:30:26.867248+0800 MethodDemo[3860:852618] Bird flies

Developers can use NSProxy to implement many functions, such as decoupling, AOP, method interception, etc. A smart guy called ibireme wrote a proxy to solve the problem that the target of an NSTimer or CADisplayLink can't be deallocated. Let's see his code from github.

@interface YYWeakProxy : NSProxy

/**
 The proxy target.
 */
@property (nullable, nonatomic, weak, readonly) id target;

/**
 Creates a new weak proxy for target.

 @param target Target object.

 @return A new proxy object.
 */
- (instancetype)initWithTarget:(id)target;

/**
 Creates a new weak proxy for target.

 @param target Target object.

 @return A new proxy object.
 */
+ (instancetype)proxyWithTarget:(id)target;

@end
@implementation YYWeakProxy

- (instancetype)initWithTarget:(id)target {
    _target = target;
    return self;
}

+ (instancetype)proxyWithTarget:(id)target {
    return [[YYWeakProxy alloc] initWithTarget:target];
}

- (id)forwardingTargetForSelector:(SEL)selector {
    return _target;
}

- (void)forwardInvocation:(NSInvocation *)invocation {
    void *null = NULL;
    [invocation setReturnValue:&null];
}

- (NSMethodSignature *)methodSignatureForSelector:(SEL)selector {
    return [NSObject instanceMethodSignatureForSelector:@selector(init)];
}

- (BOOL)respondsToSelector:(SEL)aSelector {
    return [_target respondsToSelector:aSelector];
}

- (BOOL)isEqual:(id)object {
    return [_target isEqual:object];
}

- (NSUInteger)hash {
    return [_target hash];
}

- (Class)superclass {
    return [_target superclass];
}

- (Class)class {
    return [_target class];
}

- (BOOL)isKindOfClass:(Class)aClass {
    return [_target isKindOfClass:aClass];
}

- (BOOL)isMemberOfClass:(Class)aClass {
    return [_target isMemberOfClass:aClass];
}

- (BOOL)conformsToProtocol:(Protocol *)aProtocol {
    return [_target conformsToProtocol:aProtocol];
}

- (BOOL)isProxy {
    return YES;
}

- (NSString *)description {
    return [_target description];
}

- (NSString *)debugDescription {
    return [_target debugDescription];
}
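
The usage is straightforward: pass the proxy, instead of self, as the timer's target. A typical sketch (assuming self is a view controller with a timer property and a tick method):

// In viewDidLoad, for example: the timer retains the proxy, and the proxy
// only weakly references self, so self can be deallocated normally.
self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                              target:[YYWeakProxy proxyWithTarget:self]
                                            selector:@selector(tick)
                                            userInfo:nil
                                             repeats:YES];

// dealloc is now reachable and can stop the timer.
- (void)dealloc {
    [_timer invalidate];
}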

Method in Objective-C : Message Passing


What is Message Passing?

In a previous post I mentioned Method Swizzling, which involves some knowledge about message passing. So in this post I've decided to talk about it.

We know that invoking an instance method is called message passing in Objective-C. For example:

NSMutableArray *array = [NSMutableArray array];
[array addObject:@"hello world!"];

array is the message receiver, and addObject: is called the selector, which I mentioned in the last post. A message consists of a selector and parameters.

Message passing adopts dynamic binding to decide which method to invoke. Dynamic binding means that the compiler doesn't know which method implementation will be selected; instead, the method implementation is looked up at runtime when the message is sent. So let's write some code:

id num = @123;
//output 123
NSLog(@"%@", num);
//crash, error: [__NSCFNumber appendString:]: unrecognized selector sent to instance 0x8c28
[num appendString:@"Hello World"];

The code above compiles without any problems, because the id type can point to all kinds of instances. NSString has a method appendString:; the compiler isn't sure which type of instance num is, and adding a new method to NSNumber at runtime is possible, so the compiler doesn't show an error when it finds the appendString: method declaration. But at run time appendString: can't be found in NSNumber, so the program crashes. This is a disadvantage of message passing: the compiler can't check for undefined methods.

You can learn more information about messages at the Apple developer website.

objc_msgSend()

The runtime provides a C language function for sending messages: objc_msgSend(receiver, selector, arg1, arg2, …). The first parameter is the receiver, the method's invoker; the second is the selector, which is like the method's name; several arguments follow. If you want to see the C language functions generated from your Objective-C methods, there is a cool command:

clang -rewrite-objc main.m

This command converts an objc file with the .m extension to a .cpp file.

So we have this code:

@interface Macbook : NSObject

@property (nonatomic, copy) NSString *type;
@property (nonatomic, assign) CGFloat price;

- (void)showPriceTag;

@end

@implementation Macbook

- (void)showPriceTag
{
    NSLog(@"This laptop type is %@, and price is $%.2f",self.type, self.price);
}

@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        Macbook *macbookpro = [[Macbook alloc] init];
        macbookpro.type = @"Macbook Pro Retina";
        macbookpro.price = 199.9;
        [macbookpro showPriceTag];
    }
    return 0;
}

When I used the clang command to convert the code, I got a file called main.cpp in the same folder, and I found these definitions:

#ifndef _REWRITER_typedef_Macbook
#define _REWRITER_typedef_Macbook
typedef struct objc_object Macbook;
typedef struct {} _objc_exc_Macbook;
#endif

extern "C" unsigned long OBJC_IVAR_$_Macbook$_type;
extern "C" unsigned long OBJC_IVAR_$_Macbook$_price;
struct Macbook_IMPL {
    struct NSObject_IMPL NSObject_IVARS;
    NSString *_type;
    CGFloat _price;
};


// @property (nonatomic, copy) NSString *type;
// @property (nonatomic, assign) CGFloat price;

// - (void)showPriceTag;

/* @end */


// @implementation Macbook


static void _I_Macbook_showPriceTag(Macbook * self, SEL _cmd) {
    NSLog((NSString *)&__NSConstantStringImpl__var_folders_9t_z6zhv5ys04q0bby1zkgnm14h0000gn_T_main_a83434_mi_0,((NSString *(*)(id, SEL))(void *)objc_msgSend)((id)self, sel_registerName("type")), ((CGFloat (*)(id, SEL))(void *)objc_msgSend_fpret)((id)self, sel_registerName("price")));
}


static NSString * _I_Macbook_type(Macbook * self, SEL _cmd) { return (*(NSString **)((char *)self + OBJC_IVAR_$_Macbook$_type)); }
extern "C" __declspec(dllimport) void objc_setProperty (id, SEL, long, id, bool, bool);

static void _I_Macbook_setType_(Macbook * self, SEL _cmd, NSString *type) { objc_setProperty (self, _cmd, __OFFSETOFIVAR__(struct Macbook, _type), (id)type, 0, 1); }

static CGFloat _I_Macbook_price(Macbook * self, SEL _cmd) { return (*(CGFloat *)((char *)self + OBJC_IVAR_$_Macbook$_price)); }
static void _I_Macbook_setPrice_(Macbook * self, SEL _cmd, CGFloat price) { (*(CGFloat *)((char *)self + OBJC_IVAR_$_Macbook$_price)) = price; }
// @end

int main(int argc, const char * argv[]) {
    /* @autoreleasepool */ { __AtAutoreleasePool __autoreleasepool; 
        Macbook *macbookpro = ((Macbook *(*)(id, SEL))(void *)objc_msgSend)((id)((Macbook *(*)(id, SEL))(void *)objc_msgSend)((id)objc_getClass("Macbook"), sel_registerName("alloc")), sel_registerName("init"));
        ((void (*)(id, SEL, NSString *))(void *)objc_msgSend)((id)macbookpro, sel_registerName("setType:"), (NSString *)&__NSConstantStringImpl__var_folders_9t_z6zhv5ys04q0bby1zkgnm14h0000gn_T_main_a83434_mi_1);
        ((void (*)(id, SEL, CGFloat))(void *)objc_msgSend)((id)macbookpro, sel_registerName("setPrice:"), 199.90000000000001);
        ((void (*)(id, SEL))(void *)objc_msgSend)((id)macbookpro, sel_registerName("showPriceTag"));
    }
    return 0;
}

The properties generate getter and setter methods, so they are methods as well. We can see objc_msgSend like this:

((void (*)(id, SEL, CGFloat))(void *)objc_msgSend)((id)macbookpro, sel_registerName("setPrice:"), 199.90000000000001);
  • 1. register the method name
  • 2. send the message with the argument 199.90000000000001 to the receiver macbookpro via objc_msgSend

Now let's learn how the runtime implements an object-oriented invocation through message passing in objc_msgSend. objc_msgSend selects the appropriate method to invoke according to the receiver. How does it work? We need to learn the structure of Class in the runtime.

struct objc_class {
    Class isa  OBJC_ISA_AVAILABILITY;

    Class super_class;
    const char *name;
    long version;
    long info;
    long instance_size;
    struct objc_ivar_list *ivars;
    struct objc_method_list **methodLists;
    struct objc_cache *cache;
    struct objc_protocol_list *protocols;
};
/* Use `Class` instead of `struct objc_class *` */

/// An opaque type that represents an Objective-C class.
typedef struct objc_class *Class;

/// Represents an instance of a class.
struct objc_object {
    Class isa  OBJC_ISA_AVAILABILITY;
};

/// A pointer to an instance of a class.
typedef struct objc_object *id;

You can look through the runtime's open source code here.

The struct objc_class contains a member variable struct objc_method_list **methodLists. In the rewritten main.cpp we can find the corresponding generated method list:

static struct /*_method_list_t*/ {
    unsigned int entsize;  // sizeof(struct _objc_method)
    unsigned int method_count;
    struct _objc_method method_list[5];
} _OBJC_$_INSTANCE_METHODS_Macbook __attribute__ ((used, section ("__DATA,__objc_const"))) = {
    sizeof(_objc_method),
    5,
    {{(struct objc_selector *)"showPriceTag", "v16@0:8", (void *)_I_Macbook_showPriceTag},
    {(struct objc_selector *)"type", "@16@0:8", (void *)_I_Macbook_type},
    {(struct objc_selector *)"setType:", "v24@0:8@16", (void *)_I_Macbook_setType_},
    {(struct objc_selector *)"price", "d16@0:8", (void *)_I_Macbook_price},
    {(struct objc_selector *)"setPrice:", "v24@0:8d16", (void *)_I_Macbook_setPrice_}}
};

struct _objc_method {
    struct objc_selector * _cmd;
    const char *method_type;
    void  *_imp;
};

The method list structure contains three member variables: the method size, the method count and, most importantly, the method list itself. Each element of the method list is a struct _objc_method, which consists of a selector, a method type and an implementation pointer.

That explains how objc_msgSend works: the runtime searches this method list in the receiver's class so that it can match the message's receiver and selector. If the method is found, the _imp in struct _objc_method is called; otherwise, the runtime follows the super_class pointer up the inheritance tree, trying to find the method and jump to it. If the method still isn't found when the root of the inheritance tree (normally NSObject) is reached, a method of NSObject called doesNotRecognizeSelector: is invoked. That is the unrecognized selector error which you often come across.

Message Forwarding

Actually, there are three chances to process the message before doesNotRecognizeSelector: is invoked. This is message forwarding. NSObject provides four methods to implement message forwarding:

// 1
+(BOOL)resolveInstanceMethod:(SEL)sel{
    // A chance to add the instance method and return YES. It will then try sending the message again.
}

// 2
- (id)forwardingTargetForSelector:(SEL)aSelector{
    // Return an object that can handle the selector.
}

// 3
- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector{
    // You need to implement this for the creation of an NSInvocation.
}

- (void)forwardInvocation:(NSInvocation *)invocation {
    // Invoke the selector on a target of your choice.
    [invocation invokeWithTarget:target];
}

This is the flow of message forwarding:

  • First of all, the program sends a request to the receiver's class if it doesn't find the relevant method along the inheritance tree. +resolveInstanceMethod: is a chance to add a method dynamically, as the sketch after this list shows.

  • Then, if the current instance can't add a method, the runtime asks the receiver whether another instance can process this unknown selector in -forwardingTargetForSelector:. If it returns a backup receiver, the message is processed by that receiver; if it returns nil, the program goes to the next step.

  • The third chance is forwarding an invocation. At this point, the details of the unknown message are encapsulated as an NSInvocation object, and the -forwardInvocation: method is invoked. This is the last chance: if NSObject can't process the message, doesNotRecognizeSelector: throws an exception.
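
As an illustration of the first chance, a class can attach an implementation at runtime with class_addMethod. The class and selector here are hypothetical examples of mine, not from the original post:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Hypothetical class used for illustration.
@interface MyClass : NSObject
- (void)greet; // declared but intentionally not implemented
@end

// C function that will serve as the implementation of -greet at runtime.
static void GreetIMP(id self, SEL _cmd) {
    NSLog(@"greet was resolved dynamically");
}

@implementation MyClass
+ (BOOL)resolveInstanceMethod:(SEL)sel {
    if (sel == @selector(greet)) {
        // "v@:" encodes: void return, id self, SEL _cmd.
        class_addMethod(self, sel, (IMP)GreetIMP, "v@:");
        return YES; // the runtime then retries sending the message
    }
    return [super resolveInstanceMethod:sel];
}
@end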

You can forward a method to another target, which is very useful if you're trying to bridge between different frameworks. This is what happens when you call a method that isn't implemented.

You can learn more information about message forwarding in the Apple documents.