Building WebRTC for Android

ENV
Ubuntu

Getting started and downloading the source
https://webrtc.org/native-code/development/
https://webrtc.org/native-code/android/

gclient config --name=src https://chromium.googlesource.com/external/webrtc.git
echo "target_os = ['android']" >> .gclient
gclient sync --force
gclient runhooks --force

List the supported build arguments

gn args --list out/Debug

Set the build arguments

gn gen out/Debug --args='target_os="android" rtc_include_tests=false enable_nocompile_tests=true libyuv_include_tests=false'

Start the build

ninja -C out/Debug    # or: ninja -C out/Release

If you run short on memory, limit parallelism with -j1 or -j2.

To use some of the tools bundled with the project, you first need to run:

source ./build/android/envsetup.sh

Possible problems

/mnt/extra/WebRTC/src/third_party/android_tools/sdk//build-tools/22.0.0/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
aapt here is a 32-bit binary, so it needs the 32-bit zlib:

sudo apt-get install lib32z1

Hello World Android Things

IoT (Internet of Things) sounds fancy, but it is actually something with a long history; only in recent years, as society and technology matured together (networks, protocols, devices, and so on), did it get its formal name.

Developing this kind of product used to require a complicated process: a vendor would take a specific piece of hardware, port some embedded operating system to it, and then build customized applications on top. You might need to understand low-level pieces such as drivers, and the runtime resources were usually quite limited.

Then one day Google announced something called Android Things, and a lot of this suddenly looked simpler.

I won't introduce it at length here; let's jump straight in and record how to get the first program running.

1) Hardware: RASPBERRY PI 3 MODEL B

I personally like this board for its price/performance; I have bought a few development boards, and this is one I don't feel bad about at all ^_^

Used or new, as long as the model is right, just buy one. (I used to agonize over original-country versus domestic builds; in the end I simply bought the cheaper one.)

2) Operating system: Android Things

https://developer.android.com/things/hardware/raspberrypi.html

Download the image (https://developer.android.com/things/preview/download.html) and flash it onto a Micro SD card; search the web for the exact steps (I recycled the old card from a Motorola Milestone). Once the card is ready, plug it in and power up (power over USB, video out over HDMI).

The screen after boot:

[photo: the Android Things boot screen]

The RASPBERRY PI 3 MODEL B supports both wireless and wired networking, and adb debugging works over either.

I am using macOS here.

Find where the inserted SD card is mounted:

diskutil list
sudo dd bs=1m if=iot_rpi3.img of=/dev/disk3

Adjust the image file name and device node to your setup. (You may need to run diskutil unmountDisk /dev/disk3 first, or dd will fail with "Resource busy".)

3) Writing a program

https://developer.android.com/things/sdk/samples.html

The whole point of Android Things is the expectation that IoT will take off (although nobody knows exactly when), so development has to be quick and simple. The easiest way in is to look at the official samples.

Configuration-wise such an app differs little from an ordinary Android app: just create a standard Phone/Tablet project. The main differences are in app/build.gradle and AndroidManifest.xml, roughly as follows.
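(As of the developer preview; treat the exact dependency version below as an assumption and use whichever preview you installed.) In app/build.gradle, the Things support library is provided by the device itself:

provided 'com.google.android.things:androidthings:0.1-devpreview'

And in AndroidManifest.xml you declare the shared library and mark your activity as the IoT launcher so it starts automatically on boot:

<application ...>
    <uses-library android:name="com.google.android.things" />
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <!-- launched automatically at boot as the embedded "home" -->
            <category android:name="android.intent.category.IOT_LAUNCHER" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>
    </activity>
</application>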

Building Skia Standalone for Android

I recently wanted to learn about Skia and use some of its APIs for optimization work, so I decided to try building a standalone version.

https://skia.org/user/quick/android

Source revision used:


commit 81bdbf8bed8b739c2b65ac576e89d0258276e6dc
Author: caryclark
Date: Wed Oct 21 04:16:19 2015 -0700

Build environment

Ubuntu 14.04.2

Following the official instructions works as-is; I just didn't want to download the NDK all over again, so I made a small change.


http://dl.google.com/android/ndk/android-ndk-r10e-linux-x86_64.bin

If the matching NDK version is already installed on your machine, you can modify the following file so it generates the standalone TOOLCHAIN directly (this step is optional):

/mnt/extra/skia/platform_tools/android/bin/utils/setup_toolchain.sh


function default_toolchain() {
- TOOLCHAINS=${SCRIPT_DIR}/../toolchains
+ TOOLCHAINS=/home/ubuntu/dev

ANDROID_ARCH=${ANDROID_ARCH-arm}
LLVM=3.6
@@ -50,19 +50,13 @@ function default_toolchain() {
exportVar ANDROID_TOOLCHAIN "${TOOLCHAINS}/${TOOLCHAIN}/bin"

if [ ! -d "$ANDROID_TOOLCHAIN" ]; then
- mkdir -p $TOOLCHAINS
pushd $TOOLCHAINS
- curl -o $NDK.bin https://dl.google.com/android/ndk/android-ndk-$NDK-$HOST-x86_64.bin
- chmod +x $NDK.bin
- ./$NDK.bin -y
./android-ndk-$NDK/build/tools/make-standalone-toolchain.sh \
--arch=$ANDROID_ARCH \
--llvm-version=$LLVM \
--platform=android-$API \
--install_dir=$TOOLCHAIN
cp android-ndk-$NDK/prebuilt/android-$ANDROID_ARCH/gdbserver/gdbserver $TOOLCHAIN
- rm $NDK.bin
- rm -rf android-ndk-$NDK
popd
fi

Once the TOOLCHAIN has been generated, you can also add

export ANDROID_TOOLCHAIN=/home/ubuntu/dev/arm-r10e-14/bin
export PATH=$ANDROID_TOOLCHAIN:$PATH

to your shell profile by hand (also optional).


./platform_tools/android/bin/android_ninja -d nexus_5

Then wait for the build. If the APK build fails halfway because some specific Build Tools version is missing, either change the version the app uses, or update the version referenced in the code.
The app code lives at:

/mnt/extra/skia/platform_tools/android/apps/

After the build completes, the .so files appear under

/mnt/extra/skia/out/config/android-nexus_5/Debug

A crash caused by android.util.Pair

It has been ages since the blog was updated.
I always feel I have no time for it 囧囧

Since the startup began I have been in charge of everything app-related.
One morning, during the routine look at yesterday's stats, the crash rate had spiked, yet it held steady at 4 users; all of them were on Android 4.0.4,
so clearly we had hit some damn API we were not supposed to use.


FATAL EXCEPTION: h-262 262
PID: 2610
java.lang.NullPointerException
at android.util.Pair.hashCode(Pair.java:63)
at java.lang.Object.toString(Object.java:332)
at java.lang.StringBuilder.append(StringBuilder.java:202)
at java.util.AbstractMap.toString(AbstractMap.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:202)
......
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at cc.beckon.service.a.h.run(l:159)

Looking at the latest android.util.Pair code, nothing seemed wrong, so I traced the file's change history:

[screenshot: the change history of android.util.Pair]

On older releases Pair genuinely cannot handle null values: its hashCode() dereferences first and second directly, which is exactly what the trace shows. Looking back, we shipped without ever testing this case; my bad.

Cross-compiling libevent for Android on OS X

Download the official source; libevent-2.0.21 is used here.

I first read these articles:

http://blog.csdn.net/sozell/article/details/8898646
http://blog.csdn.net/cutesource/article/details/8970641
http://blog.chinaunix.net/uid-20514606-id-485808.html
Note: $ANDROID_NDK below is the path to the NDK on your machine.

export ANDROID_ROOT=$ANDROID_NDK

export PATH=$PATH:$ANDROID_ROOT/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin

 ./configure \
 --host=arm-linux-androideabi \
 CC=arm-linux-androideabi-gcc \
 LD=arm-linux-androideabi-ld \
 CPPFLAGS="-I$ANDROID_ROOT/platforms/android-14/arch-arm/usr/include/" \
 CFLAGS="-nostdlib" \
 LDFLAGS="-Wl,-rpath-link=$ANDROID_ROOT/platforms/android-14/arch-arm/usr/lib/ -L$ANDROID_ROOT/platforms/android-14/arch-arm/usr/lib/" \
 LIBS="-lc -lgcc -L$ANDROID_ROOT/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/lib/gcc/arm-linux-androideabi/4.9"

With -nostdlib the toolchain references crtbegin_so.o/crtend_so.o without a path, so symlink them into the working directory:

ln -s $ANDROID_ROOT/platforms/android-14/arch-arm/usr/lib/crtbegin_so.o
ln -s $ANDROID_ROOT/platforms/android-14/arch-arm/usr/lib/crtend_so.o

make

There is another approach that looks a bit more proper:

http://stackoverflow.com/questions/11929773/compiling-the-latest-openssl-for-android

Here is how to build for armv7-a (other arches need slight adjustments).

Note: $ANDROID_NDK below is the path to the NDK on your machine.

export NDK=$ANDROID_NDK
$NDK/build/tools/make-standalone-toolchain.sh --platform=android-14 --toolchain=arm-linux-androideabi-4.9 --install-dir=`pwd`/android-toolchain-arm
export TOOLCHAIN_PATH=`pwd`/android-toolchain-arm/bin
export TOOL=arm-linux-androideabi
export NDK_TOOLCHAIN_BASENAME=${TOOLCHAIN_PATH}/${TOOL}
export CC=$NDK_TOOLCHAIN_BASENAME-gcc
export CXX=$NDK_TOOLCHAIN_BASENAME-g++
export LINK=${CXX}
export LD=$NDK_TOOLCHAIN_BASENAME-ld
export AR=$NDK_TOOLCHAIN_BASENAME-ar
export RANLIB=$NDK_TOOLCHAIN_BASENAME-ranlib
export STRIP=$NDK_TOOLCHAIN_BASENAME-strip
export ARCH_FLAGS="-march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16"
export ARCH_LINK="-march=armv7-a -Wl,--fix-cortex-a8"
export CPPFLAGS=" ${ARCH_FLAGS} -fpic -ffunction-sections -funwind-tables -fstack-protector -fno-strict-aliasing -finline-limit=64 "
export CXXFLAGS=" ${ARCH_FLAGS} -fpic -ffunction-sections -funwind-tables -fstack-protector -fno-strict-aliasing -finline-limit=64 -frtti -fexceptions "
export CFLAGS=" ${ARCH_FLAGS} -fpic -ffunction-sections -funwind-tables -fstack-protector -fno-strict-aliasing -finline-limit=64 "
export LDFLAGS=" ${ARCH_LINK} "

./configure --host=arm-linux-androideabi

This produces the .so/.a files under .libs.

Problems encountered while building the example program that calls into libevent; the complete code is at https://github.com/guohai/and-libevent

guohai@Hais-MacBook-Pro:~/Dev/work/idea/and-libevent/app/src/main/jni$ ndk-build V=1 -B
rm -f /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/arm64-v8a/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi-v7a/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi-v7a-hard/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/mips/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/mips64/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/x86/lib*.so /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/x86_64/lib*.so
rm -f /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/arm64-v8a/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi-v7a/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi-v7a-hard/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/mips/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/mips64/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/x86/gdbserver /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/x86_64/gdbserver
rm -f /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/arm64-v8a/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi-v7a/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/armeabi-v7a-hard/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/mips/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/mips64/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/x86/gdb.setup /Users/guohai/Dev/work/idea/and-libevent/app/src/main/libs/x86_64/gdb.setup
[armeabi-v7a] Compile thumb  : demo_libevent <= demo_libevent.c
/Users/guohai/Dev/android-ndk-r10c/toolchains/arm-linux-androideabi-4.6/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-gcc -MMD -MP -MF /Users/guohai/Dev/work/idea/and-libevent/app/src/main/obj/local/armeabi-v7a/objs/demo_libevent/demo_libevent.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -no-canonical-prefixes -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=softfp -mthumb -Os -g -DNDEBUG -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -I/Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni -DANDROID  -Wa,--noexecstack -Wformat -Werror=format-security    -I/Users/guohai/Dev/android-ndk-r10c/platforms/android-3/arch-arm/usr/include -c  /Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni/demo_libevent.c -o /Users/guohai/Dev/work/idea/and-libevent/app/src/main/obj/local/armeabi-v7a/objs/demo_libevent/demo_libevent.o
[armeabi-v7a] Executable     : demo_libevent
/Users/guohai/Dev/android-ndk-r10c/toolchains/arm-linux-androideabi-4.6/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-g++ -Wl,--gc-sections -Wl,-z,nocopyreloc --sysroot=/Users/guohai/Dev/android-ndk-r10c/platforms/android-3/arch-arm -Wl,-rpath-link=/Users/guohai/Dev/android-ndk-r10c/platforms/android-3/arch-arm/usr/lib -Wl,-rpath-link=/Users/guohai/Dev/work/idea/and-libevent/app/src/main/obj/local/armeabi-v7a /Users/guohai/Dev/work/idea/and-libevent/app/src/main/obj/local/armeabi-v7a/objs/demo_libevent/demo_libevent.o /Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni/libevent.a /Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni/libevent_core.a /Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni/libevent_extra.a /Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni/libevent_pthreads.a -lgcc -no-canonical-prefixes -march=armv7-a -Wl,--fix-cortex-a8  -Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now  -L/Users/guohai/Dev/android-ndk-r10c/platforms/android-3/arch-arm/usr/lib -llog -lc -lm -o /Users/guohai/Dev/work/idea/and-libevent/app/src/main/obj/local/armeabi-v7a/demo_libevent
/Users/guohai/Dev/android-ndk-r10c/toolchains/arm-linux-androideabi-4.6/prebuilt/darwin-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.6/../../../../arm-linux-androideabi/bin/ld: /Users/guohai/Dev/work/idea/and-libevent/app/src/main/jni/libevent.a(event.o): in function evthread_make_base_notifiable:event.c(.text.evthread_make_base_notifiable+0x5c): error: undefined reference to 'eventfd'
collect2: ld returned 1 exit status
make: *** [/Users/guohai/Dev/work/idea/and-libevent/app/src/main/obj/local/armeabi-v7a/demo_libevent] Error 1

eventfd is a syscall added to the kernel in 2.6.22, while the default -L/Users/guohai/Dev/android-ndk-r10c/platforms/android-3/arch-arm/usr/lib points at a platform level too old to provide it, so set

APP_PLATFORM := android-14

and the build passes.

Start the program in the emulator, set up port forwarding, and test:

guohai@Hais-MacBook-Pro:~$ adb forward tcp:9995 tcp:9995

guohai@Hais-MacBook-Pro:~$ telnet localhost 9995
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Hello, World!
Connection closed by foreign host.

Output on the server side:

root@generic:/data/data # ./demo_libevent                                        
flushed answer
^CCaught an interrupt signal; exiting cleanly in two seconds.
done
root@generic:/data/data # exit
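
For reference, the demo server above is essentially libevent's bundled hello-world sample ("flushed answer" and the interrupt message both come from it). A minimal sketch of such a server, assuming libevent 2.0.x (the real sample also installs a SIGINT handler, hence the clean two-second exit):

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>

#include <event2/buffer.h>
#include <event2/bufferevent.h>
#include <event2/listener.h>

static const char MESSAGE[] = "Hello, World!\n";

/* close the connection once the greeting has been fully flushed */
static void conn_writecb(struct bufferevent *bev, void *user_data)
{
    if (evbuffer_get_length(bufferevent_get_output(bev)) == 0) {
        printf("flushed answer\n");
        bufferevent_free(bev);
    }
}

/* a new connection arrived: send the greeting */
static void listener_cb(struct evconnlistener *listener, evutil_socket_t fd,
                        struct sockaddr *sa, int socklen, void *user_data)
{
    struct event_base *base = user_data;
    struct bufferevent *bev =
        bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
    bufferevent_setcb(bev, NULL, conn_writecb, NULL, NULL);
    bufferevent_enable(bev, EV_WRITE);
    bufferevent_write(bev, MESSAGE, strlen(MESSAGE));
}

int main(void)
{
    struct event_base *base = event_base_new();
    struct sockaddr_in sin;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(9995);

    struct evconnlistener *listener = evconnlistener_new_bind(base,
        listener_cb, (void *)base,
        LEV_OPT_REUSEABLE | LEV_OPT_CLOSE_ON_FREE, -1,
        (struct sockaddr *)&sin, sizeof(sin));
    if (!listener) {
        fprintf(stderr, "Could not create a listener!\n");
        return 1;
    }
    event_base_dispatch(base);    /* run the event loop forever */
    return 0;
}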

Pitfalls in Android

Let me record here, one by one, the pitfalls of Android app development. We face this kind of problem all the time: different OS versions, different vendors...
There may be no complete or elegant fix, but at least the problems are written down.

1. SoundPool.play cannot loop playback on Android 4.3
https://code.google.com/p/android/issues/detail?id=58113

2. When playing sound in streaming mode (e.g. AudioManager.MODE_IN_COMMUNICATION), switching to the speakerphone does not work:
audioManager.setMode(AudioManager.MODE_IN_CALL); // switching to call mode first makes it work
audioManager.setSpeakerphoneOn(true);

3. On some devices the speakerphone cannot be switched to at all, e.g. while a headset is plugged in.

4. A build-related one:
the Ant scripts still default to the ancient Java 1.5;
see $ANDROID_HOME/tools/ant/build.xml

<property name="java.target" value="1.5" />
<property name="java.source" value="1.5" />

Far too old for anyone chasing the new: Android development is on Java 7 by now, and people already toy with Java 9 on the side.

    [javac]   (use -source 7 or higher to enable diamond operator)
    [javac]   XXXX error: diamond operator is not supported in -source 1.5

So if you really must use Ant, patch it by hand: add an ant.properties file next to build.xml
containing

java.source=1.7
java.target=1.7

Parcelable encountered ClassNotFoundException reading a Serializable object

Writing this one down; I never found a properly convincing explanation. What the trace does show is vendor code inside system_server (com.android.server.am.AmSmartShowStub) calling Intent.hasExtra(), which forces the whole Bundle to be unparceled in a process whose class path (mediatek-op.jar) naturally does not contain the app's Serializable class.

07-09 22:25:29.927 670 918 E AmStub : java.lang.RuntimeException: Parcelable encountered ClassNotFoundException reading a Serializable object (name = xx.oo.MySerializableObject)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Parcel.readSerializable(Parcel.java:2148)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Parcel.readValue(Parcel.java:2016)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Parcel.readMapInternal(Parcel.java:2226)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Bundle.unparcel(Bundle.java:223)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Bundle.containsKey(Bundle.java:271)
07-09 22:25:29.927 670 918 E AmStub : at android.content.Intent.hasExtra(Intent.java:4414)
07-09 22:25:29.927 670 918 E AmStub : at com.android.server.am.c.a(Unknown Source)
07-09 22:25:29.927 670 918 E AmStub : at com.android.server.am.AmSmartShowStub.checkStartActivity(Unknown Source)
07-09 22:25:29.927 670 918 E AmStub : at com.android.server.am.ActivityManagerService.checkStartActivity(ActivityManagerService.java:3015)
07-09 22:25:29.927 670 918 E AmStub : at com.android.server.am.ActivityManagerService.startActivityAsUser(ActivityManagerService.java:3224)
07-09 22:25:29.927 670 918 E AmStub : at com.android.server.am.ActivityManagerService.startActivity(ActivityManagerService.java:3213)
07-09 22:25:29.927 670 918 E AmStub : at android.app.ActivityManagerNative.onTransact(ActivityManagerNative.java:144)
07-09 22:25:29.927 670 918 E AmStub : at com.android.server.am.ActivityManagerService.onTransact(ActivityManagerService.java:1968)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Binder.execTransact(Binder.java:351)
07-09 22:25:29.927 670 918 E AmStub : at dalvik.system.NativeStart.run(Native Method)
07-09 22:25:29.927 670 918 E AmStub : Caused by: java.lang.ClassNotFoundException: xx.oo.MySerializableObject
07-09 22:25:29.927 670 918 E AmStub : at java.lang.Class.classForName(Native Method)
07-09 22:25:29.927 670 918 E AmStub : at java.lang.Class.forName(Class.java:217)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:2279)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.readNewClassDesc(ObjectInputStream.java:1638)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:658)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.readNewObject(ObjectInputStream.java:1781)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.readNonPrimitiveContent(ObjectInputStream.java:762)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.readObject(ObjectInputStream.java:1981)
07-09 22:25:29.927 670 918 E AmStub : at java.io.ObjectInputStream.readObject(ObjectInputStream.java:1938)
07-09 22:25:29.927 670 918 E AmStub : at android.os.Parcel.readSerializable(Parcel.java:2142)
07-09 22:25:29.927 670 918 E AmStub : ... 14 more
07-09 22:25:29.927 670 918 E AmStub : Caused by: java.lang.NoClassDefFoundError: xx/oo/MySerializableObject
07-09 22:25:29.927 670 918 E AmStub : ... 24 more
07-09 22:25:29.927 670 918 E AmStub : Caused by: java.lang.ClassNotFoundException: Didn't find class "xx.oo.MySerializableObject" on path: DexPathList[[zip file "/system/framework/mediatek-op.jar"],nativeLibraryDirectories=[/vendor/lib, /system/lib]]
07-09 22:25:29.927 670 918 E AmStub : at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:53)
07-09 22:25:29.927 670 918 E AmStub : at java.lang.ClassLoader.loadClass(ClassLoader.java:501)
07-09 22:25:29.927 670 918 E AmStub : at java.lang.ClassLoader.loadClass(ClassLoader.java:461)
07-09 22:25:29.927 670 918 E AmStub : ... 24 more
07-09 22:25:29.934 670 918 I ActivityManager: START u0 {flg=0x10000000 cmp=xx.oo/xx.oo.MyActivity (has extras) contextId=2722, taskId=2306 } from pid 23461

Dex error when packaging with the Android KitKat SDK

NOTICE: this issue has since been fixed upstream; just download the new 19.0.1 build-tools. See https://code.google.com/p/android/issues/detail?id=61710 and http://developer.android.com/tools/revisions/build-tools.html

Many people trying out Android KitKat early have hit the following error:

[2013-11-01 16:58:07 - Dex Loader] Unable to execute dex: java.nio.BufferOverflowException. Check the Eclipse log for stack trace.
[2013-11-01 16:58:07 - Hello-Android] Conversion to Dalvik format failed: Unable to execute dex: java.nio.BufferOverflowException. Check the Eclipse log for stack trace.

java.nio.BufferOverflowException
	at java.nio.Buffer.nextPutIndex(Buffer.java:499)
	at java.nio.HeapByteBuffer.putShort(HeapByteBuffer.java:296)
	at com.android.dex.Dex$Section.writeShort(Dex.java:818)
	at com.android.dex.Dex$Section.writeTypeList(Dex.java:870)
	at com.android.dx.merge.DexMerger$3.write(DexMerger.java:437)
	at com.android.dx.merge.DexMerger$3.write(DexMerger.java:423)
	at com.android.dx.merge.DexMerger$IdMerger.mergeUnsorted(DexMerger.java:317)
	at com.android.dx.merge.DexMerger.mergeTypeLists(DexMerger.java:423)
	at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:163)
	at com.android.dx.merge.DexMerger.merge(DexMerger.java:187)
	at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:439)
	at com.android.dx.command.dexer.Main.runMonoDex(Main.java:287)
	at com.android.dx.command.dexer.Main.run(Main.java:230)
	at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at com.android.ide.eclipse.adt.internal.build.DexWrapper.run(DexWrapper.java:187)
	at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeDx(BuildHelper.java:780)
	at com.android.ide.eclipse.adt.internal.build.builders.PostCompilerBuilder.build(PostCompilerBuilder.java:593)
	at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:728)
	at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
	at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:199)
	at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:239)
	at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:292)
	at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
	at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:295)
	at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:351)
	at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:374)
	at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:143)
	at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:241)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)

Someone raised this on Google Groups right away
("Dex issues with latest SDK"),
but there was no official answer at the time; it looked like a build-tools regression.

In my tests, projects targeting Android 4.1 and above are basically unaffected, while those below 4.1 all seem to hit it.

So it seemed we could only wait for an official answer...

Some people reported that rolling build-tools back to 18.1.1 fixes it.

If you are affected, try either of these two approaches.

P.S. Confirmed on both Linux and Windows; some say Mac OS is unaffected... nothing to do but shrug. Also, syncing the AOSP source is a truly painful process.

SurfaceFlinger Source Code Analysis

This is based on the Jelly Bean code.
You can find introductions to what SurfaceFlinger is elsewhere on the web; here we go straight to the code.

First, a common programming idiom we must know: the producer/consumer model. It may feel trivial, but these basic concepts are used heavily here.
BufferQueue: data is queued into it, after the producer first dequeues an empty data unit from the BufferQueue, called a buffer, which is actually of type GraphicBuffer.

ConsumerBase: the interface used on the consumer side. It implements BufferQueue::ConsumerListener, so it is notified (onFrameAvailable) when a buffer is queued into the BufferQueue. Likewise, it is notified (onBuffersReleased) when the producer disconnects from the BufferQueue or when setBufferCount is called (that method releases all buffers back to the BufferQueue, and returns an error if any buffer is in the DEQUEUED state).

BufferItemConsumer and CpuConsumer: both subclasses of ConsumerBase. BufferItemConsumer can acquire several buffers at a time while ConsumerBase acquires one: BufferItemConsumer raises the BufferQueue's mMaxAcquiredBufferCount, whereas ConsumerBase keeps the default of 1. CpuConsumer can lock buffers for CPU access, which it also does through the GRALLOC API.
FramebufferSurface: a ConsumerBase subclass that pushes the data it receives onto the screen through HWComposer.
SurfaceTexture: a ConsumerBase subclass that can turn a GraphicBuffer into a texture image and hand it to OpenGL.

SurfaceTextureLayer is a customized BufferQueue; requests arriving via NATIVE_WINDOW_API_MEDIA/NATIVE_WINDOW_API_CAMERA switch the BufferQueue into asynchronous mode.

The states a buffer can be in inside a BufferQueue: simple, but important.

// BufferState represents the different states in which a buffer slot
// can be.
enum BufferState {
    // FREE indicates that the buffer is not currently being used and
    // will not be used in the future until it gets dequeued and
    // subsequently queued by the client.
    // aka "owned by BufferQueue, ready to be dequeued"
    FREE = 0,

    // DEQUEUED indicates that the buffer has been dequeued by the
    // client, but has not yet been queued or canceled. The buffer is
    // considered 'owned' by the client, and the server should not use
    // it for anything.
    //
    // Note that when in synchronous-mode (mSynchronousMode == true),
    // the buffer that's currently attached to the texture may be
    // dequeued by the client.  That means that the current buffer can
    // be in either the DEQUEUED or QUEUED state.  In asynchronous mode,
    // however, the current buffer is always in the QUEUED state.
    // aka "owned by producer, ready to be queued"
    DEQUEUED = 1,

    // QUEUED indicates that the buffer has been queued by the client,
    // and has not since been made available for the client to dequeue.
    // Attaching the buffer to the texture does NOT transition the
    // buffer away from the QUEUED state. However, in Synchronous mode
    // the current buffer may be dequeued by the client under some
    // circumstances. See the note about the current buffer in the
    // documentation for DEQUEUED.
    // aka "owned by BufferQueue, ready to be acquired"
    QUEUED = 2,

    // aka "owned by consumer, ready to be released"
    ACQUIRED = 3
};

The main BufferQueue methods:
dequeueBuffer: hands the client a buffer (returned by slot, picked from those in the FREE state); when necessary (a null buffer, or any mismatch of height/width/format/usage) it allocates one via GraphicBufferAlloc::createGraphicBuffer().

requestBuffer: returns the buffer pointer for a given slot, used mainly right after allocation (or when a slot's buffer pointer unexpectedly turns out to be null); currently used by SurfaceTextureClient (Surface).

queueBuffer: tells the BufferQueue that a buffer full of data has been pushed in; QueueBufferInput describes that buffer, and QueueBufferOutput reports the BufferQueue's current state (default height/width/transformHint and the number of slots, the slot in question having just been handed back to the BufferQueue).

acquireBuffer: takes ownership of a pending buffer, one sitting in mQueue in the QUEUED state (so yes, it carries data).

releaseBuffer: gives up ownership of the buffer in the given slot.

freeBuffer or cancelBuffer both put a buffer back into the FREE state.
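
From the producer side, this dequeue/queue cycle is what the NDK exposes through <android/native_window.h>: lock a buffer, write pixels, post it back. A minimal sketch (where the window comes from, and the RGBA_8888 format, are assumptions here):

#include <string.h>
#include <android/native_window.h>

void draw_one_frame(ANativeWindow *window)
{
    ANativeWindow_Buffer buffer;

    /* dequeueBuffer under the hood: grab a FREE buffer the CPU may write to */
    if (ANativeWindow_lock(window, &buffer, NULL) != 0)
        return;

    /* fill it; note that stride is measured in pixels, not bytes */
    memset(buffer.bits, 0, buffer.stride * buffer.height * 4 /* RGBA_8888 */);

    /* queueBuffer under the hood: hand it back QUEUED for the consumer */
    ANativeWindow_unlockAndPost(window);
}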

The main ConsumerBase methods:
acquireBufferLocked/releaseBufferLocked/freeBufferLocked/abandonLocked

This protected array also matters: subclasses read buffer information straight out of it; it effectively caches the necessary parts of the BufferQueue's bookkeeping.

// mSlots stores the buffers that have been allocated by the BufferQueue
// for each buffer slot.  It is initialized to null pointers, and gets
// filled in with the result of BufferQueue::acquire when the
// client dequeues a buffer from a
// slot that has not yet been used. The buffer allocated to a slot will also
// be replaced if the requested buffer usage or geometry differs from that
// of the buffer allocated to a slot.
Slot mSlots[BufferQueue::NUM_BUFFER_SLOTS];

SurfaceTextureClient is an ANativeWindow: it provides the concrete implementations of the native_window_api_* and native_window_* functions (all declared in system/core/include/system/window.h), and it also holds the SurfaceTexture.

// Initialize the ANativeWindow function pointers.
ANativeWindow::setSwapInterval  = hook_setSwapInterval;
ANativeWindow::dequeueBuffer    = hook_dequeueBuffer;
ANativeWindow::cancelBuffer     = hook_cancelBuffer;
ANativeWindow::queueBuffer      = hook_queueBuffer;
ANativeWindow::query            = hook_query;
ANativeWindow::perform          = hook_perform;

ANativeWindow::dequeueBuffer_DEPRECATED = hook_dequeueBuffer_DEPRECATED;
ANativeWindow::cancelBuffer_DEPRECATED  = hook_cancelBuffer_DEPRECATED;
ANativeWindow::lockBuffer_DEPRECATED    = hook_lockBuffer_DEPRECATED;
ANativeWindow::queueBuffer_DEPRECATED   = hook_queueBuffer_DEPRECATED;

const_cast<int&>(ANativeWindow::minSwapInterval) = 0;
const_cast<int&>(ANativeWindow::maxSwapInterval) = 1;

Some important renames

Early Jelly Bean (4.1/4.2)                        4.3
================================================================================
SurfaceTextureClient, plus Surface                Simplified into Surface
(which inherited from it); together               (an ANativeWindow)
they were the ANativeWindow

================================================================================
ISurfaceTexture                                   IGraphicBufferProducer,
                                                  the Binder IPC interface used to
                                                  move buffers between components
                                                  (across processes); BufferQueue
                                                  implements BnGraphicBufferProducer

================================================================================
SurfaceTexture (a ConsumerBase)                   GLConsumer (a ConsumerBase): takes
                                                  buffers out of the BufferQueue and
                                                  hands them to OpenGL as textures

上面是会用到的基本知识,下面基本才直接和SurfaceFlinger相关。
箭头的方向为继承的方向

                             BpSurface       ---->>>>      ISurface
                                                           sp<ISurfaceTexture> ISurface::getSurfaceTexture()

BSurface       ---->>>>      BnSurface       ---->>>>      ISurface
sp<ISurfaceTexture> BSurface::getSurfaceTexture()
        SurfaceTexture::getBufferQueue()
Layer       ---->>>>      LayerBaseClient       ---->>>>       LayerBase
sp<ISurface> Layer::createSurface()
        new BSurface

                          sp<ISurface> LayerBaseClient::getSurface()
                                  sp<ISurface> LayerBaseClient::createSurface() 
                           BpSurfaceComposerClient       ---->>>>      ISurfaceComposerClient
                                                                       sp<ISurface> ISurfaceComposerClient::createSurface()

Client       ---->>>>      BnSurfaceComposerClient       ---->>>>      ISurfaceComposerClient
Client::createSurface()
        SurfaceFlinger::createLayer()
                createXXXLayer()
                        new LayerXXX
                Layer::getSurface()
                	Layer::createSurface()
sp<SurfaceControl> SurfaceComposerClient::createSurface()
              ISurfaceComposerClient::createSurface()
              new SurfaceControl(ISurface)
// SurfaceComposerClient is just an ordinary utility class; its createSurface calls ISurfaceComposerClient::createSurface

Now walk through one scenario: a client creates a SurfaceView. What happens in between?
Naturally you first need to know how SurfaceView/SurfaceHolder/Surface relate to one another in the Java layer.

=================================Java=====================================================
new SurfaceView
	surface = new Surface // the Surface inside SurfaceView (still empty; nothing is
	                      // actually created on the server side yet)
	newSurface = new Surface // a fresh Surface: whenever the Surface is changed/created/
	                         // destroyed/needs redrawing, the system prepares it first,
	                         // then copies it over to replace the one inside our
	                         // SurfaceView (via transferFrom)

The method that actually creates the Surface is called by the system, never directly by the app; once called, it drops into the corresponding JNI method,
where a SurfaceSession is used: on paper, one session with SurfaceFlinger. The client must talk to the server, hence the session concept; concretely it is just an instance of the native SurfaceComposerClient.

=================================JNI&Native========================================
android_view_Surface.cpp nativeCreate()
	android_view_SurfaceSession_getClient
	SurfaceComposerClient->createSurface
		ISurfaceComposerClient->createSurface // IPC
			Client->createSurface
				SurfaceFlinger->createLayer
					createXXXLayer()
						new LayerXXX
					Layer->getSurface()
		new SurfaceControl // the SurfaceControl holds the freshly created ISurface
	setSurfaceControl // saved into the JNI context

At this point the ISurface has been created.

Now look at the other path. The Window/View system needs to initialize the whole window, so various SurfaceView callbacks (resize/new-surface/onWindowVisibilityChanged/setVisibility/onDetachedFromWindow) get invoked. They eventually call updateWindow; after IWindowSession.relayout a new Surface is produced and copied into the SurfaceView's Surface via Surface.transferFrom.

One more thing worth noting: how the Java-layer Surface (Surface.java) maps to the native Surface (Surface.h|cpp, i.e. SurfaceTextureClient). Surface.java holds a pointer field named mNativeSurface pointing at the native Surface; every time a new native Surface is created it is stored into the JNI context, and that is how Java and native convert back and forth.

Next, consider only the native-side handling of Surface. android_view_Surface.h|cpp has a method android_view_Surface_getNativeWindow,
which calls an internal getSurface, as follows:

static sp<Surface> getSurface(JNIEnv* env, jobject surfaceObj) {
    sp<Surface> result(android_view_Surface_getSurface(env, surfaceObj)); // null if nothing is cached yet
    if (result == NULL) {
        /*
         * if this method is called from the WindowManager's process, it means
         * the client is is not remote, and therefore is allowed to have
         * a Surface (data), so we create it here.
         * If we don't have a SurfaceControl, it means we're in a different
         * process.
         */

        SurfaceControl* const control = reinterpret_cast<SurfaceControl*>(
                env->GetIntField(surfaceObj, gSurfaceClassInfo.mNativeSurfaceControl));
        if (control) {
            result = control->getSurface(); // creates the Surface (SurfaceTextureClient)
            if (result != NULL) {
                result->incStrong(surfaceObj);
                env->SetIntField(surfaceObj, gSurfaceClassInfo.mNativeSurface, // the native backing field, see gui/Surface.h
                        reinterpret_cast<jint>(result.get()));
            }
        }
    }
    return result;
}

sp<ANativeWindow> android_view_Surface_getNativeWindow(JNIEnv* env, jobject surfaceObj) { // used by native activities
    return getSurface(env, surfaceObj);
}

This looks like where the Surface gets created, but not really: it serves native activities. An ordinary Java Activity goes through createFromParcel.
During creation the ISurface member is initialized (it originates from the Layer created inside SurfaceFlinger), and ISurface->getSurfaceTexture() fetches the BufferQueue. With that, the (Surface) SurfaceTextureClient and the BufferQueue are wired together, and data can be pushed into the BufferQueue through native_window_* or the ANativeWindow.

Take Camera as an example. In the HAL, every stream is created with a camera2_stream_ops argument, and the stream callbacks call camera2_stream_ops->enqueue_buffer, which leads to ANativeWindow->queueBuffer and finally into the BufferQueue's own methods. So if the ANativeWindow we feed the camera HAL was created inside SurfaceFlinger, the stream's data flows back into SurfaceFlinger; SurfaceFlinger merges the data of whatever layers it needs and puts the result on the framebuffer. That is how camera preview works.

Now some of the more important features and classes inside SurfaceFlinger.
Jelly Bean brought Project Butter, which mainly introduced VSYNC and triple buffering; triple buffering is mentioned in Layer.h|cpp.
So what is VSYNC? Simply put, a fixed-frequency clock, normally provided by the display hardware; if the hardware does not provide one, Android simulates it, see the VSyncThread class in HWComposer.h|cpp. Its implementation is short and clear; read the code and you will get it. VSYNC and triple buffering are old, well-worn techniques in the PC world; search around if you are interested.
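
A sketch of the idea behind the software fallback (not the actual VSyncThread code, just its shape): a thread that wakes at a fixed period and fires the callback:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define REFRESH_PERIOD_NS (1000000000LL / 60)   /* ~60 Hz */

static void on_vsync(int64_t timestamp_ns)      /* stand-in for onVSyncReceived */
{
    printf("vsync @ %lld\n", (long long)timestamp_ns);
}

static void *vsync_thread(void *arg)
{
    (void)arg;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* advance the deadline by one period, carrying into seconds */
        next.tv_nsec += REFRESH_PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep until the absolute deadline, then "deliver" the vsync */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        on_vsync((int64_t)next.tv_sec * 1000000000LL + next.tv_nsec);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, vsync_thread, NULL);
    pthread_join(t, NULL);   /* this sketch runs forever */
    return 0;
}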

Viewed simply: with the hardware implementation we register a callback with the hardware, it fires on every VSYNC, and eventually onVSyncReceived is invoked; the software approach just uses a timer and calls onVSyncReceived at fixed intervals.
There is also onHotplugReceived, unrelated to VSYNC but appearing here: it fires when an external or virtual display is plugged in or unplugged. Unplugging switches from hardware VSYNC back to the software method; plugging back in switches from software to hardware. In short, the hardware method is preferred.

IDisplayEventConnection is the channel a client uses to talk VSYNC with SurfaceFlinger, implemented over Binder, with methods such as setVsyncRate/requestNextVsync/getDataChannel. Their meanings follow from the names: set the rate at which VSYNC events are delivered; request a single VSYNC event manually; and get the data-transfer channel, here a BitTube, a cross-process pipe built on sockets on which you can register interest in events and be notified when they arrive (implemented with epoll).
So each client picks the VSYNC event rate it wants and then just listens for notifications; the Java-layer Choreographer is built on exactly this. Note that there can be several IDisplayEventConnections: the View system and the Animation system each use one, and even if your app uses neither, it can register on its own through Choreographer.
So how does SurfaceFlinger manage these event requests and listeners?
Through EventThread, an ordinary Thread. Every client call to SurfaceFlinger's createDisplayEventConnection creates a Connection, which is added to EventThread's mDisplayEventConnections and kicks the thread's threadLoop into action (when there is nothing to do, the thread sleeps: waitForEvent blocks in wait). Results are finally delivered via postEvent into the BitTube, so whoever registered a listener on it receives the corresponding event.
For detailed code annotations see https://github.com/guohai/and-notes/tree/master/surfaceflinger-jb-4.2: Chinese comments, plus perhaps a few English comments I added myself.

Miscellany:
FramebufferNativeWindow is no longer used.

We usually say that on newer hardware-accelerated devices and systems we prefer TextureView over SurfaceView. Why is that?
As everyone knows, SurfaceView creates a separate Surface, which shows up in SurfaceFlinger as one extra Layer that has to be merged with the existing layers (the window, the status bar, and so on) before being drawn on the display.
Does TextureView avoid that? Yes: TextureView is built on SurfaceTexture, and creating a SurfaceTexture does not add a Layer inside SurfaceFlinger, because the work is done by the hardware; that is also why TextureView requires hardware acceleration to be supported and enabled, otherwise it can do nothing at all. A layer of sorts still exists, but it is created and managed by the hardware, so the software side does not pay for it. The native SurfaceTexture (a ConsumerBase) receives the incoming data and passes it up to the View layer through JNI, and the software side's job ends there.

P.S. More details to be filled in.

Qualcomm Camera HAL 2.0

We know that the vendor HAL implementation dynamically loads a file named camera.$platform$.so and hooks up the functions defined by the Android HAL. Here we look at Camera HAL 2.0 on Qualcomm msm8960 as the example, following up on an earlier article (http://guoh.org/lifelog/2013/07/glance-at-camera-hal-2-0/).

(Note: this article sat in drafts for quite a while. I had no device that could run this version of the code, so I cannot be sure everything is correct; several places are clearly stub implementations. There may be mistakes; corrections are welcome, and I will keep fixing whatever I find!)

camera2.h defines a lot of functions; in the msm8960 HAL they live under
/path/to/qcam-hal/QCamera/HAL2
which builds into the camera.$platform$.so. Let's look at the implementation,
starting with HAL2/wrapper/QualcommCamera.h|cpp:

/**
 * The functions need to be provided by the camera HAL.
 *
 * If getNumberOfCameras() returns N, the valid cameraId for getCameraInfo()
 * and openCameraHardware() is 0 to N-1.
 */

static hw_module_methods_t camera_module_methods = {
    open: camera_device_open,
};

static hw_module_t camera_common  = {
    tag: HARDWARE_MODULE_TAG,
    module_api_version: CAMERA_MODULE_API_VERSION_2_0, // this is what makes Camera Service spin up the Camera2Client family
    hal_api_version: HARDWARE_HAL_API_VERSION,
    id: CAMERA_HARDWARE_MODULE_ID,
    name: "Qcamera",
    author:"Qcom",
    methods: &camera_module_methods,
    dso: NULL,
    reserved:  {0},
};

camera_module_t HAL_MODULE_INFO_SYM = { // the HMI symbol every HAL module must export
    common: camera_common,
    get_number_of_cameras: get_number_of_cameras,
    get_camera_info: get_camera_info,
};

camera2_device_ops_t camera_ops = { // note the functions bound here
    set_request_queue_src_ops:           android::set_request_queue_src_ops,
    notify_request_queue_not_empty:      android::notify_request_queue_not_empty,
    set_frame_queue_dst_ops:             android::set_frame_queue_dst_ops,
    get_in_progress_count:               android::get_in_progress_count,
    flush_captures_in_progress:          android::flush_captures_in_progress,
    construct_default_request:           android::construct_default_request,

    allocate_stream:                     android::allocate_stream,
    register_stream_buffers:             android::register_stream_buffers,
    release_stream:                      android::release_stream,

    allocate_reprocess_stream:           android::allocate_reprocess_stream,
    allocate_reprocess_stream_from_stream: android::allocate_reprocess_stream_from_stream,
    release_reprocess_stream:            android::release_reprocess_stream,

    trigger_action:                      android::trigger_action,
    set_notify_callback:                 android::set_notify_callback,
    get_metadata_vendor_tag_ops:         android::get_metadata_vendor_tag_ops,
    dump:                                android::dump,
};

typedef struct { // note: a wrapper struct defined by Qualcomm itself
  camera2_device_t hw_dev; // the standard part
  QCameraHardwareInterface *hardware;
  int camera_released;
  int cameraId;
} camera_hardware_t;

/* HAL should return NULL if it fails to open camera hardware. */
extern "C" int  camera_device_open(
  const struct hw_module_t* module, const char* id,
          struct hw_device_t** hw_device)
{
    int rc = -1;
    int mode = 0;
    camera2_device_t *device = NULL;
    if (module && id && hw_device) {
        int cameraId = atoi(id);

        if (!strcmp(module->name, camera_common.name)) {
            camera_hardware_t *camHal =
                (camera_hardware_t *) malloc(sizeof (camera_hardware_t));
            if (!camHal) {
                *hw_device = NULL;
	        	ALOGE("%s:  end in no mem", __func__);
				return rc;
	    	}
		    /* we have the camera_hardware obj malloced */
		    memset(camHal, 0, sizeof (camera_hardware_t));
		    camHal->hardware = new QCameraHardwareInterface(cameraId, mode);
		    if (camHal->hardware && camHal->hardware->isCameraReady()) {
				camHal->cameraId = cameraId;
		    	device = &camHal->hw_dev; // the camera2_device_t
		        device->common.close = close_camera_device; // initialize the camera2_device_t
		        device->common.version = CAMERA_DEVICE_API_VERSION_2_0;
		        device->ops = &camera_ops;
		        device->priv = (void *)camHal;
		        rc =  0;
		    } else {
		        if (camHal->hardware) {
		            delete camHal->hardware;
		            camHal->hardware = NULL;
		        }
		        free(camHal);
		        device = NULL;
		    }
        }
    }
    /* pass actual hw_device ptr to framework. This amkes that we actally be use memberof() macro */
    *hw_device = (hw_device_t*)&device->common; // the usual trick in the kernel and Android's native frameworks
    return rc;
}

Now look at allocate_stream:

int allocate_stream(const struct camera2_device *device,
        uint32_t width,
        uint32_t height,
        int      format,
        const camera2_stream_ops_t *stream_ops,
        uint32_t *stream_id,
        uint32_t *format_actual,
        uint32_t *usage,
        uint32_t *max_buffers)
{
    QCameraHardwareInterface *hardware = util_get_Hal_obj(device);
    int rc = hardware->allocate_stream(width, height, format, stream_ops,
            stream_id, format_actual, usage, max_buffers);
    return rc;
}

Note that QCameraHardwareInterface lives in QCameraHWI.h|cpp:

int QCameraHardwareInterface::allocate_stream(
    uint32_t width,
    uint32_t height, int format,
    const camera2_stream_ops_t *stream_ops,
    uint32_t *stream_id,
    uint32_t *format_actual,
    uint32_t *usage,
    uint32_t *max_buffers)
{
    int ret = OK;
    QCameraStream *stream = NULL;
    camera_mode_t myMode = (camera_mode_t)(CAMERA_MODE_2D|CAMERA_NONZSL_MODE);

    stream = QCameraStream_preview::createInstance(
                        mCameraHandle->camera_handle,
                        mChannelId,
                        width,
                        height,
                        format,
                        mCameraHandle,
                        myMode);

    stream->setPreviewWindow(stream_ops); // note: every stream created via this call gets its ANativeWindow (stream_ops) attached
    *stream_id = stream->getStreamId();
    *max_buffers= stream->getMaxBuffers(); // reported by the HAL
    *usage = GRALLOC_USAGE_HW_CAMERA_WRITE | CAMERA_GRALLOC_HEAP_ID
        | CAMERA_GRALLOC_FALLBACK_HEAP_ID;
    /* Set to an arbitrary format SUPPORTED by gralloc */
    *format_actual = HAL_PIXEL_FORMAT_YCrCb_420_SP;

    return ret;
}

QCameraStream_preview::createInstance simply calls its own constructor, shown below
(the relevant classes are in QCameraStream.h|cpp and QCameraStream_Preview.cpp):

QCameraStream_preview::QCameraStream_preview(uint32_t CameraHandle,
                        uint32_t ChannelId,
                        uint32_t Width,
                        uint32_t Height,
                        int requestedFormat,
                        mm_camera_vtbl_t *mm_ops,
                        camera_mode_t mode) :
                 QCameraStream(CameraHandle,
                        ChannelId,
                        Width,
                        Height,
                        mm_ops,
                        mode),
                 mLastQueuedFrame(NULL),
                 mDisplayBuf(NULL),
                 mNumFDRcvd(0)
{
    mStreamId = allocateStreamId(); // allocate a stream id (from mStreamTable)

    switch (requestedFormat) { // max buffer count per format
    case CAMERA2_HAL_PIXEL_FORMAT_OPAQUE:
        mMaxBuffers = 5;
        break;
    case HAL_PIXEL_FORMAT_BLOB:
        mMaxBuffers = 1;
        break;
    default:
        ALOGE("Unsupported requested format %d", requestedFormat);
        mMaxBuffers = 1;
        break;
    }
    /*TODO: There has to be a better way to do this*/
}

Next, look under
/path/to/qcam-hal/QCamera/stack/mm-camera-interface/
In mm_camera_interface.h:

typedef struct {
    uint32_t camera_handle;        /* camera object handle */
    mm_camera_info_t *camera_info; /* reference pointer of camear info */
    mm_camera_ops_t *ops;          /* API call table */
} mm_camera_vtbl_t;

And in mm_camera_interface.c:

/* camera ops v-table */
static mm_camera_ops_t mm_camera_ops = {
    .sync = mm_camera_intf_sync,
    .is_event_supported = mm_camera_intf_is_event_supported,
    .register_event_notify = mm_camera_intf_register_event_notify,
    .qbuf = mm_camera_intf_qbuf,
    .camera_close = mm_camera_intf_close,
    .query_2nd_sensor_info = mm_camera_intf_query_2nd_sensor_info,
    .is_parm_supported = mm_camera_intf_is_parm_supported,
    .set_parm = mm_camera_intf_set_parm,
    .get_parm = mm_camera_intf_get_parm,
    .ch_acquire = mm_camera_intf_add_channel,
    .ch_release = mm_camera_intf_del_channel,
    .add_stream = mm_camera_intf_add_stream,
    .del_stream = mm_camera_intf_del_stream,
    .config_stream = mm_camera_intf_config_stream,
    .init_stream_bundle = mm_camera_intf_bundle_streams,
    .destroy_stream_bundle = mm_camera_intf_destroy_bundle,
    .start_streams = mm_camera_intf_start_streams,
    .stop_streams = mm_camera_intf_stop_streams,
    .async_teardown_streams = mm_camera_intf_async_teardown_streams,
    .request_super_buf = mm_camera_intf_request_super_buf,
    .cancel_super_buf_request = mm_camera_intf_cancel_super_buf_request,
    .start_focus = mm_camera_intf_start_focus,
    .abort_focus = mm_camera_intf_abort_focus,
    .prepare_snapshot = mm_camera_intf_prepare_snapshot,
    .set_stream_parm = mm_camera_intf_set_stream_parm,
    .get_stream_parm = mm_camera_intf_get_stream_parm
};

Take start stream as the example:

mm_camera_intf_start_streams(mm_camera_interface
    mm_camera_start_streams(mm_camera
    	mm_channel_fsm_fn(mm_camera_channel
    		mm_channel_fsm_fn_active(mm_camera_channel
    			mm_channel_start_streams(mm_camera_channel
    				mm_stream_fsm_fn(mm_camera_stream
    					mm_stream_fsm_reg(mm_camera_stream
    						mm_camera_cmd_thread_launch(mm_camera_data
    						mm_stream_streamon(mm_camera_stream

Note: throughout this article, increasing indentation like the above denotes a call relationship; entries at the same depth are called from the same enclosing function.

int32_t mm_stream_streamon(mm_stream_t *my_obj)
{
    int32_t rc;
    enum v4l2_buf_type buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;

    /* Add fd to data poll thread */
    rc = mm_camera_poll_thread_add_poll_fd(&my_obj->ch_obj->poll_thread[0],
                                           my_obj->my_hdl,
                                           my_obj->fd,
                                           mm_stream_data_notify,
                                           (void*)my_obj);
    if (rc < 0) {
        return rc;
    }
    rc = ioctl(my_obj->fd, VIDIOC_STREAMON, &buf_type);
    if (rc < 0) {
        CDBG_ERROR("%s: ioctl VIDIOC_STREAMON failed: rc=%d\n",
                   __func__, rc);
        /* remove fd from data poll thread in case of failure */
        mm_camera_poll_thread_del_poll_fd(&my_obj->ch_obj->poll_thread[0], my_obj->my_hdl);
    }
    return rc;
}

Seeing ioctl with VIDIOC_STREAMON is good news: this is how user space and kernel space communicate in the V4L2 spec. V4L2 (Video for Linux Two) is a classic, mature video API, successor to V4L; download the spec if you are new to it, and The Video4Linux2 API (http://lwn.net/Articles/203924/) is also excellent material.
A quick overview:

open(VIDEO_DEVICE_NAME, ...) // open the video device, usually during program initialization

ioctl(...) // control operations that move only small amounts of data
Many request codes are available; a typical order of use is:
VIDIOC_QUERYCAP // what can the device do?
VIDIOC_CROPCAP // query its cropping capabilities
VIDIOC_S_* // set/get parameters
VIDIOC_G_*
VIDIOC_REQBUFS // allocate buffers (several allocation methods exist)
VIDIOC_QUERYBUF // query the details of an allocated buffer
VIDIOC_QBUF // QUEUE BUFFER: push an (empty) buffer into the driver's queue
VIDIOC_STREAMON // start video streaming
VIDIOC_DQBUF // DEQUEUE BUFFER: take a (filled) buffer back out of the driver's queue

[0..n]
QBUF -> DQBUF // repeat this pair as long as you like

VIDIOC_STREAMOFF // stop video streaming

close(VIDEO_DEVICE_FD) // close the device
Those are the main calls and their rough order; two more functions matter:

select() // wait for an event, mainly after handing frame buffers to the driver
mmap/munmap // manage the requested buffers when they are allocated in device memory

And mm_camera_stream is indeed implemented exactly this way.
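
Put into compilable form, a minimal sketch of that mmap-streaming sequence (device node and buffer count are assumptions; format negotiation via VIDIOC_S_FMT and all error handling are omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);

    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);            /* allocate buffers in the driver */

    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer qb;
        memset(&qb, 0, sizeof(qb));
        qb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        qb.memory = V4L2_MEMORY_MMAP;
        qb.index = i;
        ioctl(fd, VIDIOC_QUERYBUF, &qb);        /* where is buffer i, how big? */
        void *start = mmap(NULL, qb.length, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, qb.m.offset);
        (void)start;                            /* frames will land here */
        ioctl(fd, VIDIOC_QBUF, &qb);            /* queue the (empty) buffer */
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);          /* start streaming */

    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_DQBUF, &buf);              /* blocks until a frame is filled */
    /* ...consume the frame at the mmap'ed address for buf.index, then: */
    ioctl(fd, VIDIOC_QBUF, &buf);               /* hand it back; repeat QBUF/DQBUF */

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    close(fd);
    return 0;
}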

With that behind us, back to the QCam HAL. Its implementation is not quite as simple as the start-stream path listed above, but it is not complex either; what matters most are the states and the structures involved.

First the channel states. Only one channel is currently supported, but it can carry multiple streams (more on that below; at most 8 streams at present):

/* mm_channel */
typedef enum {
    MM_CHANNEL_STATE_NOTUSED = 0,   /* not used */
    MM_CHANNEL_STATE_STOPPED,       /* stopped */
    MM_CHANNEL_STATE_ACTIVE,        /* active, at least one stream active */
    MM_CHANNEL_STATE_PAUSED,        /* paused */
    MM_CHANNEL_STATE_MAX
} mm_channel_state_type_t;

The events a channel can handle:

typedef enum {
    MM_CHANNEL_EVT_ADD_STREAM,
    MM_CHANNEL_EVT_DEL_STREAM,
    MM_CHANNEL_EVT_START_STREAM,
    MM_CHANNEL_EVT_STOP_STREAM,
    MM_CHANNEL_EVT_TEARDOWN_STREAM,
    MM_CHANNEL_EVT_CONFIG_STREAM,
    MM_CHANNEL_EVT_PAUSE,
    MM_CHANNEL_EVT_RESUME,
    MM_CHANNEL_EVT_INIT_BUNDLE,
    MM_CHANNEL_EVT_DESTROY_BUNDLE,
    MM_CHANNEL_EVT_REQUEST_SUPER_BUF,
    MM_CHANNEL_EVT_CANCEL_REQUEST_SUPER_BUF,
    MM_CHANNEL_EVT_START_FOCUS,
    MM_CHANNEL_EVT_ABORT_FOCUS,
    MM_CHANNEL_EVT_PREPARE_SNAPSHOT,
    MM_CHANNEL_EVT_SET_STREAM_PARM,
    MM_CHANNEL_EVT_GET_STREAM_PARM,
    MM_CHANNEL_EVT_DELETE,
    MM_CHANNEL_EVT_MAX
} mm_channel_evt_type_t;
/* mm_stream */
typedef enum { // watch these states carefully: every operation moves the machine along
    MM_STREAM_STATE_NOTUSED = 0,      /* not used */
    MM_STREAM_STATE_INITED,           /* inited  */
    MM_STREAM_STATE_ACQUIRED,         /* acquired, fd opened  */
    MM_STREAM_STATE_CFG,              /* fmt & dim configured */
    MM_STREAM_STATE_BUFFED,           /* buf allocated */
    MM_STREAM_STATE_REG,              /* buf regged, stream off */
    MM_STREAM_STATE_ACTIVE_STREAM_ON, /* active with stream on */
    MM_STREAM_STATE_ACTIVE_STREAM_OFF, /* active with stream off */
    MM_STREAM_STATE_MAX
} mm_stream_state_type_t;

Likewise, the events a stream can handle:

typedef enum {
    MM_STREAM_EVT_ACQUIRE,
    MM_STREAM_EVT_RELEASE,
    MM_STREAM_EVT_SET_FMT,
    MM_STREAM_EVT_GET_BUF,
    MM_STREAM_EVT_PUT_BUF,
    MM_STREAM_EVT_REG_BUF,
    MM_STREAM_EVT_UNREG_BUF,
    MM_STREAM_EVT_START,
    MM_STREAM_EVT_STOP,
    MM_STREAM_EVT_QBUF,
    MM_STREAM_EVT_SET_PARM,
    MM_STREAM_EVT_GET_PARM,
    MM_STREAM_EVT_MAX
} mm_stream_evt_type_t;

Each function first checks the channel/stream state and only proceeds when the state permits it.

For example, you can observe that
mm_channel has an mm_channel_state_type_t state;
mm_stream has an mm_stream_state_type_t state;
each holding the structure's current state.

Additionally,
struct mm_camera_obj
struct mm_channel
struct mm_stream
contain one another from the top down, and a stream or channel also holds a reference back to its parent structure (so to speak; strictly it is a container relationship).

In reality every vendor HAL has its own way of implementing things and may contain plenty of proprietary bits; here, for instance, vendor-specific commands and data structures are fed to ioctl. Those details only matter when targeting a given platform and vary endlessly: OMAP4, say, talks to its driver over rpmsg and implements everything along OpenMAX lines.

Enough theory; now a concrete example. Camera Service wants to start the preview:

Camera2Client::startPreviewL
	StreamingProcessor->updatePreviewStream
		Camera2Device->createStream
			StreamAdapter->connectToDevice
				camera2_device_t->ops->allocate_stream // 上面有分析
				native_window_api_*或者native_window_*

	StreamingProcessor->startStream
		Camera2Device->setStreamingRequest
			Camera2Device::RequestQueue->setStreamSlot // create a stream slot
				Camera2Device::RequestQueue->signalConsumerLocked
status_t Camera2Device::MetadataQueue::signalConsumerLocked() {
    status_t res = OK;
    notEmpty.signal();
    if (mSignalConsumer && mDevice != NULL) {
        mSignalConsumer = false;
        mMutex.unlock();
        res = mDevice->ops->notify_request_queue_not_empty(mDevice); // tell the vendor HAL's command thread to run;
        															 // notify_request_queue_not_empty is not fired on every request:
        															 // only at initialization, or when the command thread dequeued
        															 // NULL and Camera Service later gets new requests coming in.
        															 // Call it load shedding: no requests, no running thread
        															 // (though the usual pattern is to park the thread on a lock)
        mMutex.lock();
    }
    return res;
}

Over in the Qualcomm HAL:

int notify_request_queue_not_empty(const struct camera2_device *device) // registered into camera2_device_ops_t
	QCameraHardwareInterface->notify_request_queue_not_empty()
		pthread_create(&mCommandThread, &attr, command_thread, (void *)this) != 0)
void *command_thread(void *obj)
{
	...
	pme->runCommandThread(obj);
}
void QCameraHardwareInterface::runCommandThread(void *data)
{
    /**
     * This function implements the main service routine for the incoming
     * frame requests, this thread routine is started everytime we get a 
     * notify_request_queue_not_empty trigger, this thread makes the 
     * assumption that once it receives a NULL on a dequest_request call 
     * there will be a fresh notify_request_queue_not_empty call that is
     * invoked thereby launching a new instance of this thread. Therefore,
     * once we get a NULL on a dequeue request we simply let this thread die
     */ 
    int res;
    camera_metadata_t *request=NULL;
    mPendingRequests=0;

    while (mRequestQueueSrc) { // mRequestQueueSrc comes in via set_request_queue_src_ops;
    						   // see Camera2Device::MetadataQueue::setConsumerDevice,
    						   // called from Camera2Device::initialize
        ALOGV("%s:Dequeue request using mRequestQueueSrc:%p",__func__,mRequestQueueSrc);
        mRequestQueueSrc->dequeue_request(mRequestQueueSrc, &request); // fetch a framework request
        if (request==NULL) {
            ALOGE("%s:No more requests available from src command \
                    thread dying",__func__);
            return;
        }
        mPendingRequests++;

        /* Set the metadata values */

        /* Wait for the SOF for the new metadata values to be applied */

        /* Check the streams that need to be active in the stream request */
        sort_camera_metadata(request);

        camera_metadata_entry_t streams;
        res = find_camera_metadata_entry(request,
                ANDROID_REQUEST_OUTPUT_STREAMS,
                &streams);
        if (res != NO_ERROR) {
            ALOGE("%s: error reading output stream tag", __FUNCTION__);
            return;
        }

        res = tryRestartStreams(streams); // goes on to prepareStream and streamOn; full code below
        if (res != NO_ERROR) {
            ALOGE("error tryRestartStreams %d", res);
            return;
        }

        /* 3rd pass: Turn on all streams requested */
        for (uint32_t i = 0; i < streams.count; i++) {
            int streamId = streams.data.u8[i];
            QCameraStream *stream = QCameraStream::getStreamAtId(streamId);

            /* Increment the frame pending count in each stream class */

            /* Assuming we will have the stream obj in had at this point may be
             * may be multiple objs in which case we loop through array of streams */
            stream->onNewRequest();
        }
        ALOGV("%s:Freeing request using mRequestQueueSrc:%p",__func__,mRequestQueueSrc);
        /* Free the request buffer */
        mRequestQueueSrc->free_request(mRequestQueueSrc,request);
        mPendingRequests--;
        ALOGV("%s:Completed request",__func__);
    }
 
    QCameraStream::streamOffAll();
}

The following method explains where mRequestQueueSrc comes from:

// Connect to camera2 HAL as consumer (input requests/reprocessing)
status_t Camera2Device::MetadataQueue::setConsumerDevice(camera2_device_t *d) {
    ATRACE_CALL();
    status_t res;
    res = d->ops->set_request_queue_src_ops(d,
            this);
    if (res != OK) return res;
    mDevice = d;
    return OK;
}

This is because of the chain below:

QCameraStream_preview->prepareStream
	QCameraStream->initStream
		mm_camera_vtbl_t->ops->add_stream(... stream_cb_routine ...) // the data-return callback, taking mm_camera_super_buf_t* and void* parameters
			mm_camera_add_stream
				mm_channel_fsm_fn(..., MM_CHANNEL_EVT_ADD_STREAM, ..., mm_evt_paylod_add_stream_t)
					mm_channel_fsm_fn_stopped
						mm_channel_add_stream(..., mm_camera_buf_notify_t, ...)
							mm_stream_fsm_inited


Inside mm_channel_add_stream, the mm_camera_buf_notify_t gets wrapped into an mm_stream_t:

mm_stream_t *stream_obj = NULL;
/* initialize stream object */
memset(stream_obj, 0, sizeof(mm_stream_t));
/* cd through intf always palced at idx 0 of buf_cb */
stream_obj->buf_cb[0].cb = buf_cb; // the callback
stream_obj->buf_cb[0].user_data = user_data;
stream_obj->buf_cb[0].cb_count = -1; /* infinite by default */

And mm_stream_fsm_inited is entered with MM_STREAM_EVT_ACQUIRE as the event parameter:

int32_t mm_stream_fsm_inited(mm_stream_t *my_obj,
                             mm_stream_evt_type_t evt,
                             void * in_val,
                             void * out_val)
{
    int32_t rc = 0;
    char dev_name[MM_CAMERA_DEV_NAME_LEN];

    switch (evt) {
    case MM_STREAM_EVT_ACQUIRE:
        if ((NULL == my_obj->ch_obj) || (NULL == my_obj->ch_obj->cam_obj)) {
            CDBG_ERROR("%s: NULL channel or camera obj\n", __func__);
            rc = -1;
            break;
        }

        snprintf(dev_name, sizeof(dev_name), "/dev/%s",
                 mm_camera_util_get_dev_name(my_obj->ch_obj->cam_obj->my_hdl));

        my_obj->fd = open(dev_name, O_RDWR | O_NONBLOCK); // open the video device
        if (my_obj->fd <= 0) {
            CDBG_ERROR("%s: open dev returned %d\n", __func__, my_obj->fd);
            rc = -1;
            break;
        }
        rc = mm_stream_set_ext_mode(my_obj);
        if (0 == rc) {
            my_obj->state = MM_STREAM_STATE_ACQUIRED; // mm_stream_state_type_t
        } else {
            /* failed setting ext_mode
             * close fd */
            if(my_obj->fd > 0) {
                close(my_obj->fd);
                my_obj->fd = -1;
            }
            break;
        }
        rc = get_stream_inst_handle(my_obj);
        if(rc) {
            if(my_obj->fd > 0) {
                close(my_obj->fd);
                my_obj->fd = -1;
            }
        }
        break;
    default:
        CDBG_ERROR("%s: Invalid evt=%d, stream_state=%d",
                   __func__,evt,my_obj->state);
        rc = -1;
        break;
    }
    return rc;
}

And further:

QCameraStream->streamOn
	mm_camera_vtbl_t->ops->start_streams
		mm_camera_intf_start_streams
			mm_camera_start_streams
				mm_channel_fsm_fn(..., MM_CHANNEL_EVT_START_STREAM, ...)
					mm_stream_fsm_fn(..., MM_STREAM_EVT_START, ...)
    						mm_camera_cmd_thread_launch // launch the callback (CB) thread
						mm_stream_streamon(mm_stream_t)
							mm_camera_poll_thread_add_poll_fd(..., mm_stream_data_notify , ...)

static void mm_stream_data_notify(void* user_data)
{
    mm_stream_t *my_obj = (mm_stream_t*)user_data;
    int32_t idx = -1, i, rc;
    uint8_t has_cb = 0;
    mm_camera_buf_info_t buf_info;

    if (NULL == my_obj) {
        return;
    }

    if (MM_STREAM_STATE_ACTIVE_STREAM_ON != my_obj->state) {
        /* this Cb will only received in active_stream_on state
         * if not so, return here */
        CDBG_ERROR("%s: ERROR!! Wrong state (%d) to receive data notify!",
                   __func__, my_obj->state);
        return;
    }

    memset(&buf_info, 0, sizeof(mm_camera_buf_info_t));

    pthread_mutex_lock(&my_obj->buf_lock);
    rc = mm_stream_read_msm_frame(my_obj, &buf_info); // reads one frame via ioctl(..., VIDIOC_DQBUF, ...)
    if (rc != 0) {
        pthread_mutex_unlock(&my_obj->buf_lock);
        return;
    }
    idx = buf_info.buf->buf_idx;

    /* update buffer location */
    my_obj->buf_status[idx].in_kernel = 0;

    /* update buf ref count */
    if (my_obj->is_bundled) {
        /* need to add into super buf since bundled, add ref count */
        my_obj->buf_status[idx].buf_refcnt++;
    }

    for (i=0; i < MM_CAMERA_STREAM_BUF_CB_MAX; i++) {
        if(NULL != my_obj->buf_cb[i].cb) {
            /* for every CB, add ref count */
            my_obj->buf_status[idx].buf_refcnt++;
            has_cb = 1;
        }
    }
    pthread_mutex_unlock(&my_obj->buf_lock);

    mm_stream_handle_rcvd_buf(my_obj, &buf_info); // mm_camera_queue_enq: pushes the frame into the queue
    											  // (provided a callback is registered) and signals it via
    											  // sem_post; the thread started by mm_camera_cmd_thread_launch
    											  // then drains the queue and runs the callbacks
}

As a result, once the stream is on, stream_cb_routine (implemented in QCameraStream) runs continuously:

void stream_cb_routine(mm_camera_super_buf_t *bufs,
                       void *userdata)
{
    QCameraStream *p_obj=(QCameraStream*) userdata;
    switch (p_obj->mExtImgMode) { // this mode is fixed back at prepareStream time
    case MM_CAMERA_PREVIEW:
        ALOGE("%s : callback for MM_CAMERA_PREVIEW", __func__);
        ((QCameraStream_preview *)p_obj)->dataCallback(bufs); // CAMERA_PREVIEW and CAMERA_VIDEO handled identically?
        break;
    case MM_CAMERA_VIDEO:
        ALOGE("%s : callback for MM_CAMERA_VIDEO", __func__);
        ((QCameraStream_preview *)p_obj)->dataCallback(bufs);
        break;
    case MM_CAMERA_SNAPSHOT_MAIN:
        ALOGE("%s : callback for MM_CAMERA_SNAPSHOT_MAIN", __func__);
        p_obj->p_mm_ops->ops->qbuf(p_obj->mCameraHandle,
                                   p_obj->mChannelId,
                                   bufs->bufs[0]);
		break;
	case MM_CAMERA_SNAPSHOT_THUMBNAIL:
		break;
	default:
		break;
    }
}
void QCameraStream::dataCallback(mm_camera_super_buf_t *bufs)
{
    if (mPendingCount != 0) { // does this dataCallback really keep firing all the time?
    						   // The callback count set at registration defaults to -1, i.e. infinite,
    						   // which seems the only explanation: without a constant trigger,
    						   // even though onNewRequest bumps mPendingCount, nothing here would notice
        ALOGD("Got frame request");
        pthread_mutex_lock(&mFrameDeliveredMutex);
        mPendingCount--;
        ALOGD("Completed frame request");
        pthread_cond_signal(&mFrameDeliveredCond);
        pthread_mutex_unlock(&mFrameDeliveredMutex);
        processPreviewFrame(bufs);
    } else {
        p_mm_ops->ops->qbuf(mCameraHandle,
                mChannelId, bufs->bufs[0]); // when nobody wants the data, push the buffer straight back
                                            // into the driver's queue (this ends in a V4L2 QBUF)
    }
}

Curiously, in the version of the QCam HAL code I have, camera2_frame_queue_dst_ops_t is never used:

int QCameraHardwareInterface::set_frame_queue_dst_ops(
    const camera2_frame_queue_dst_ops_t *frame_dst_ops)
{
    mFrameQueueDst = frame_dst_ops; // this appears to go unused
    return OK;
}

So Camera Service's FrameProcessor never gets anything out of Camera2Device->getNextFrame. I cannot tell whether that is just my copy of the code; in any case the latest Qualcomm camera HAL no longer lives in the AOSP tree but ships directly as a proprietary .so, though that is beside the point.

Taken together, then: there can be several QCameraStreams, each handling its own duty,
with interdependencies; for instance, a newly arriving stream may force other streams that are already on to restart.

Another highlight of Camera HAL 2.0 is the re-process stream.
Simply put, an output stream is fed back into the BufferQueue as an input stream for some other consumer to work on, like links in a chain.
ZslProcessor uses this today.

ZslProcessor->updateStream
	Camera2Device->createStream
	Camera2Device->createReprocessStreamFromStream // on release, the re-process stream is deleted first
		new ReprocessStreamAdapter
		ReprocessStreamAdapter->connectToDevice
			camera2_device_t->ops->allocate_reprocess_stream_from_stream

Here ReprocessStreamAdapter is effectively the camera2_stream_in_ops_t, responsible for managing the re-process stream.

But this version of the Qualcomm code does not seem to implement it either, so we stop here and revisit if I ever find the corresponding code.

Having seen all this, there should be no surprises. From Camera Service's standpoint, it holds two MetadataQueues: mRequestQueue and mFrameQueue.
App actions such as set parameter/start preview/start recording are translated directly into requests, placed on mRequestQueue, and the preview/recording streams are restarted.
A capture likewise becomes a request on mRequestQueue.
When needed, notify_request_queue_not_empty tells the QCam HAL there is work to process, and the HAL starts a thread (QCameraHardwareInterface::runCommandThread) that keeps going until all requests are handled and then exits.
During that processing, each stream's processPreviewFrame is called, and each may invoke its own follow-up callbacks.
One more implementation detail: stream_cb_routine is registered once on the channel at stream start, and it indirectly calls QCameraStream::dataCallback, shown above (stream_cb_routine knows why the callback fired, so it can route to the proper dataCallback). That callback keeps arriving all the time, so after each new request increments mPendingCount (in onNewRequest below), the next dataCallback runs processPreviewFrame; otherwise the buffer is simply pushed back into the driver's queue.

void QCameraStream::onNewRequest()
{
    ALOGI("%s:E",__func__);
    pthread_mutex_lock(&mFrameDeliveredMutex);
    ALOGI("Sending Frame request");
    mPendingCount++;
    pthread_cond_wait(&mFrameDeliveredCond, &mFrameDeliveredMutex); // wait for this request to be served before taking the next
    ALOGV("Got frame");
    pthread_mutex_unlock(&mFrameDeliveredMutex);
    ALOGV("%s:X",__func__);
}

processPreviewFrame calls the enqueue_buffer method of the BufferQueue that was attached when the stream was created, pushing the data into the BufferQueue for the corresponding consumer to receive.
For instance, in Android Camera HAL 2.0 the following currently
camera2/BurstCapture.h
camera2/CallbackProcessor.h
camera2/JpegProcessor.h
camera2/StreamingProcessor.h
camera2/ZslProcessor.h
implement the corresponding Consumer::FrameAvailableListener; burst-capture can be ignored for now, as it is still only a stub implementation.

ZslProcessor.h and CaptureSequencer.h both implement FrameProcessor::FilteredListener's onFrameAvailable(...),
but as discussed, this version of the QCam HAL leaves that part unimplemented, so FrameProcessor cannot obtain any metadata,
and viewed that way onFrameAvailable never gets notified. (I do believe this is just a problem with the copy of the code I have.)

As said before, parts of the QCam HAL are unimplemented, so mFrameQueue stays empty, even though the metadata coming back from the driver is supposed to be queued onto it.

Also,
CaptureSequencer.h implements onCaptureAvailable, which JpegProcessor notifies once it has finished.

Open questions: multiple streams do not return at the same time, so won't differing CPU processing speeds introduce skew? And I am very curious how the driver handles video snapshot: if buffers come back strictly in order, the video loses a frame; if not, does the driver return several buffers at once? I had honestly never thought about this before @_@