Implementing camera functionality in React Native

**Author: Chimezie Innocent ✏️**

In this article, I'll cover how to migrate from the now-deprecated react-native-camera to the more powerful react-native-vision-camera, how to handle permissions, how to optimize performance, and how to implement advanced use cases such as a custom UI and face detection.

By the end, you'll have the tools to build a production-ready camera experience that fits your app's needs. Throughout this article, I'll also refer to react-native-vision-camera as VisionCamera.

Installing VisionCamera

For this article, we'll be using the react-native-vision-camera package.

To install VisionCamera, run the following command:

/* Expo */
npx expo install react-native-vision-camera

Next, we'll configure VisionCamera by adding it to the plugins array in the app.json file:

{
  "name": "camera-app",
  "slug": "camera-app",
  "plugins": [
    [
      "react-native-vision-camera",
      {
        "cameraPermissionText": "$(PRODUCT_NAME) needs access to your Camera.",
        "enableMicrophonePermission": true,
        "microphonePermissionText": "$(PRODUCT_NAME) needs access to your Microphone."
      }
    ]
  ]
}

Next, install the expo-dev-client package. It replaces the default in-app development tools with tools that support network debugging, launching updates, and more. To install the package, run the following command:

npx expo install expo-dev-client

Prebuild your app so the changes are compiled and applied:

npx expo prebuild

Finally, run a development build on your device with the following command:

eas build --profile development --platform android

If you run into a "Failed to upload metadata to EAS Build" error, it's because EAS doesn't know which files you're trying to upload. To fix it, simply initialize your directory with git init.

Migrating from react-native-camera to react-native-vision-camera

Previously, react-native-camera was the go-to package for implementing camera functionality. It has been deprecated and unmaintained for a while now, so you'll need to migrate to react-native-vision-camera.

A typical react-native-camera implementation looked like this:

import { RNCamera } from 'react-native-camera';

const Camera = () => (
  <RNCamera style={{ flex: 1 }} type={RNCamera.Constants.Type.back} />
);

To use react-native-vision-camera, import the Camera component and the device and permission hooks from the package. The useCameraDevice hook selects which camera to use, i.e. the back or front camera, while the useCameraPermission hook lets you request permission and check the permission status:

import { useEffect } from "react";
import {
  Camera,
  useCameraDevice,
  useCameraPermission,
} from "react-native-vision-camera";

const NewCamera = () => {
  const device = useCameraDevice("back");
  const { hasPermission, requestPermission } = useCameraPermission();

  useEffect(() => {
    if (!hasPermission) {
      requestPermission();
    }
  }, [hasPermission]);

  if (!hasPermission) return null;
  if (device == null) return null;

  return (
    <Camera style={{ flex: 1 }} device={device} isActive={true} />
  );
};

Handling camera permissions

Before the Camera component can be used, the app must be granted permission. Fortunately, react-native-vision-camera ships with a hook that makes this easy:

import { useCameraPermission } from "react-native-vision-camera";

export default function HomeScreen() {
  const { hasPermission, requestPermission } = useCameraPermission();

  if (!hasPermission) {
    return (
      <View style={styles.permissionView}>
        <Text style={styles.permissionText}>
          Camera App requires permission.
        </Text>
        <TouchableOpacity
          style={styles.permissionButton}
          onPress={requestPermission}
        >
          <Text style={styles.permissionButtonText}>Grant Permission</Text>
        </TouchableOpacity>
      </View>
    );
  }
  }

  return (
    ....
  );
}

`hasPermission` returns a boolean that tells us whether permission has already been granted. If the user hasn't granted permission yet, we render a button asking them to do so. For that, `react-native-vision-camera` exports a function called `requestPermission`, which returns a promise.
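Since `requestPermission` resolves with a boolean, you can await it and react when the user denies the request, for example by sending them to the system settings. Here's a minimal sketch of that idea (the alert copy and the `Linking.openSettings()` fallback are my own additions, not part of the original example):

import { Alert, Linking } from "react-native";
import { useCameraPermission } from "react-native-vision-camera";

const usePermissionPrompt = () => {
  const { hasPermission, requestPermission } = useCameraPermission();

  const ensurePermission = async () => {
    if (hasPermission) return true;
    // requestPermission resolves with true or false depending on the user's choice
    const granted = await requestPermission();
    if (!granted) {
      Alert.alert("Permission needed", "Enable camera access in Settings.", [
        { text: "Open Settings", onPress: () => Linking.openSettings() },
        { text: "Cancel", style: "cancel" },
      ]);
    }
    return granted;
  };

  return ensurePermission;
};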

Finally, we can put together the complete camera permission handling:

import React from "react";
import { View, Text, StyleSheet, TouchableOpacity } from "react-native";
import {
  Camera,
  useCameraDevice,
  useCameraPermission,
} from "react-native-vision-camera";

export default function HomeScreen() {
  const device = useCameraDevice("back");
  const { hasPermission, requestPermission } = useCameraPermission();

  if (!hasPermission || !device) {
    return (
      <View style={styles.permissionView}>
        <Text style={styles.permissionText}>
          Camera App requires permission.
        </Text>
        <TouchableOpacity
          style={styles.permissionButton}
          onPress={requestPermission}
        >
          <Text style={styles.permissionButtonText}>Grant Permission</Text>
        </TouchableOpacity>
      </View>
    );
  }
  return (
    <View style={styles.container}>
      <Camera style={StyleSheet.absoluteFill} device={device} isActive={true} />
    </View>
  );
}
const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "black",
  },
  permissionView: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "#fff",
  },
  permissionText: {
    fontSize: 18,
    marginBottom: 20,
  },
  permissionButton: {
    backgroundColor: "#007BFF",
    paddingVertical: 12,
    paddingHorizontal: 24,
    borderRadius: 6,
  },
  permissionButtonText: {
    color: "#fff",
    fontSize: 16,
  },
});

In the code above, if `hasPermission` returns true, camera permission has been granted and we go ahead and use the camera. If it is false, permission hasn't been granted and we have to request it. Finally, we use the back camera, since that's what we explicitly specified in the `useCameraDevice` hook.
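If you need the front camera instead, pass "front" to the same hook. The hook also accepts an optional second argument to narrow the selection to specific physical devices; the filter below is just an illustration based on the VisionCamera documentation:

// Select the front camera instead of the back camera
const frontDevice = useCameraDevice("front");

// Optionally prefer a specific physical lens on multi-camera phones
const wideBackDevice = useCameraDevice("back", {
  physicalDevices: ["ultra-wide-angle-camera"],
});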

Use cases

Now that camera permissions are in place, we can start implementing our features. Let's dive into some of the camera use cases we'll build.

Taking photos

To take a photo, we'll create a ref for the camera and call the `takePhoto({})` method on that ref. `takePhoto({})` accepts options such as flash, `enableShutterSound`, and `enableAutoRedEyeReduction`, and resolves with a photo file that includes its `path`.

For now, we'll only set flash and `enableShutterSound`:

const cameraRef = useRef(null);

const takePhoto = async () => {
  if (cameraRef.current) {
    const { path } = await cameraRef.current.takePhoto({
      flash: "on",
      enableShutterSound: true,
    });
    console.log(path);
    // save to camera roll here
  } else {
    Alert.alert("Camera not ready");
  }
};

When you tap the button, the path is logged to your console. Once you have the path, you can save the photo to the camera roll.

To save pictures to the camera roll, install this package:

npm install @react-native-camera-roll/camera-roll

Rebuild your app so that the change, i.e. the newly added native package, takes effect.

The package exports a CameraRoll module that gives us access to the local camera roll or photo gallery. Using its saveAsset method, we can save the image like this:

const takePhoto = async () => {
  if (cameraRef.current) {
    const { path } = await cameraRef.current.takePhoto({
      flash: "on",
      enableShutterSound: true,
    });
    await CameraRoll.saveAsset(`file://${path}`, {
      type: "photo",
    });
  } else {
    Alert.alert("Camera not ready");
  }
};

Open your camera roll and you'll see the image you just captured.

Here's the complete code:

import React, { useRef } from "react";
import { View, Text, Alert, StyleSheet, TouchableOpacity } from "react-native";
import {
  Camera,
  useCameraDevice,
  useCameraPermission,
} from "react-native-vision-camera";
import { CameraRoll } from "@react-native-camera-roll/camera-roll";

export default function HomeScreen() {
  const cameraRef = useRef(null);
  const device = useCameraDevice("back");
  const { hasPermission, requestPermission } = useCameraPermission();

  const takePhoto = async () => {
    if (cameraRef.current) {
      const { path } = await cameraRef.current.takePhoto({
        flash: "on",
        enableShutterSound: true,
      });
      await CameraRoll.saveAsset(`file://${path}`, {
        type: "photo",
      });
    } else {
      Alert.alert("Camera not ready");
    }
  };
  if (!hasPermission || !device) {
    return (
      <View style={styles.permissionView}>
        <Text style={styles.permissionText}>
          Camera App requires permission.
        </Text>
        <TouchableOpacity
          style={styles.permissionButton}
          onPress={requestPermission}
        >
          <Text style={styles.permissionButtonText}>Grant Permission</Text>
        </TouchableOpacity>
      </View>
    );
  }
  return (
    <View style={styles.container}>
      <Camera
        ref={cameraRef}
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        photo={true}
      />
      <TouchableOpacity style={styles.takePhoto} onPress={takePhoto}>
        <View style={styles.takePhotoButton} />
      </TouchableOpacity>
    </View>
  );
}
const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "black",
  },
  permissionView: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "#fff",
  },
  permissionText: {
    fontSize: 18,
    marginBottom: 20,
  },
  permissionButton: {
    backgroundColor: "#007BFF",
    paddingVertical: 12,
    paddingHorizontal: 24,
    borderRadius: 6,
  },
  permissionButtonText: {
    color: "#fff",
    fontSize: 16,
  },
  takePhoto: {
    position: "absolute",
    bottom: 80,
    width: 70,
    height: 70,
    borderRadius: 50,
    backgroundColor: "#fff",
    padding: 4,
  },
  takePhotoButton: {
    borderWidth: 2,
    borderColor: "#000",
    backgroundColor: "#fff",
    borderRadius: 50,
    width: "100%",
    height: "100%",
  },
});

Recording videos

Next, we'll implement video recording. Just like taking a photo, we'll use the cameraRef to record a video:

const startRecording = async () => {
  if (cameraRef.current) {
    try {
      cameraRef.current.startRecording({
        flash: "off",
        onRecordingError: (error) => console.error("Recording error:", error),
        onRecordingFinished: async ({ path }) => {
          try {
            await CameraRoll.saveAsset(`file://${path}`, { type: "video" });
          } catch (error) {
            console.error("Error saving video:", error);
          }
        },
      });
    } catch (error) {
      console.error("Error during recording:", error);
    }
  }
};

`startRecording` accepts options such as `flash`, `fileType`, `onRecordingError`, and `onRecordingFinished`.

`flash` has two options, `on` and `off`; `fileType` lets you choose whether the video is saved as `mp4` or `mov`; `onRecordingError` lets you catch runtime errors while the video is recording; and `onRecordingFinished` is called when the recording succeeds so that you can save the file to the camera roll.
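For example, to explicitly record an mp4 file you can pass `fileType` along with the callbacks. This is a small sketch of what that call might look like; it reuses the cameraRef and CameraRoll from the snippets above:

cameraRef.current.startRecording({
  flash: "off",
  fileType: "mp4", // or "mov"
  onRecordingError: (error) => console.error("Recording error:", error),
  onRecordingFinished: async ({ path }) => {
    // Save the finished recording to the camera roll
    await CameraRoll.saveAsset(`file://${path}`, { type: "video" });
  },
});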

In our Camera component, we set video and audio to true because we want to use the video functionality:

    <View style={styles.container}>
      <Camera
        ref={cameraRef}
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        video={true}
        audio={true}
      />
      <TouchableOpacity style={styles.takePhoto} onPress={startRecording}>
        <View style={styles.takePhotoButton} />
      </TouchableOpacity>
    </View>

You may have noticed that we haven't implemented a way to end the video recording yet, so right now our video can only start recording.

To end or stop the recording, we'll use a piece of state that is set to true when recording starts and back to false when we want to stop:

const [isRecording, setIsRecording] = useState(false);

const startRecording = async () => {
  if (cameraRef.current) {
    try {
      if (isRecording) {
        await cameraRef.current.stopRecording();
        setIsRecording(false);
      } else {
        cameraRef.current.startRecording({
          flash: "off",
          onRecordingError: (error) =>
            console.error("Recording error:", error),
          onRecordingFinished: async ({ path }) => {
            try {
              await CameraRoll.saveAsset(`file://${path}`, { type: "video" });
            } catch (error) {
              console.error("Error saving video:", error);
            }
          },
        });
        setIsRecording(true);
      }
    } catch (error) {
      console.error("Error during recording:", error);
    }
  }
};

Similarly, we'll use a condition to change the button's style while a video is recording. This makes for a better user experience:

      {/* e.g. turn the capture button red while a video is recording */}
      <TouchableOpacity
        style={[styles.takePhoto, isRecording && { backgroundColor: "red" }]}
        onPress={startRecording}
      >
        <View style={styles.takePhotoButton} />
      </TouchableOpacity>

Customizing the camera UI

Earlier, we implemented photo and video capture; this section focuses on enhancing the user experience with features like a flash toggle, camera switching, in-app gallery navigation, and a recording timer.

Here's the `StyleSheet` we'll use for the custom UI:

const deviceWidth = Dimensions.get("window").width;
const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "black",
  },
  permissionView: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "#fff",
  },
  permissionText: {
    fontSize: 18,
    marginBottom: 20,
  },
  permissionButton: {
    backgroundColor: "#007BFF",
    paddingVertical: 12,
    paddingHorizontal: 24,
    borderRadius: 6,
  },
  permissionButtonText: {
    color: "#fff",
    fontSize: 16,
  },
  galleryGrid: {
    flexDirection: "row",
    flexWrap: "wrap",
    justifyContent: "space-evenly",
    paddingBottom: 50,
  },
  galleryPhotos: {
    width: deviceWidth / 3 - 1,
    height: 150,
    borderWidth: 2,
    borderColor: "#fff",
    marginVertical: 1,
    justifyContent: "flex-start",
    alignItems: "flex-start",
    borderRadius: 8,
  },
  videoTimer: {
    position: "absolute",
    top: 50,
    fontSize: 24,
    fontWeight: "bold",
    color: "white",
  },
  flash: {
    position: "absolute",
    top: 80,
    right: 20,
  },
  slider: {
    position: "absolute",
    bottom: 150,
    width: "70%",
  },
  textOverlayView: {
    position: "absolute",
    top: 80,
    left: 20,
    flexDirection: "row",
    justifyContent: "space-between",
    alignItems: "center",
    width: 100,
  },
  textOverlayText: {
    fontSize: 20,
    color: "#fff",
    paddingVertical: 4,
    paddingHorizontal: 12,
  },
  controlsView: {
    position: "absolute",
    bottom: 30,
    left: 0,
    right: 0,
    alignItems: "center",
    justifyContent: "space-between",
    flexDirection: "row",
    paddingHorizontal: 30,
    width: deviceWidth,
  },
  photogallery: {
    width: 60,
    height: 60,
    borderRadius: 10,
    backgroundColor: "rgba(255, 255, 255, 0.1)",
  },
  takePhoto: {
    width: 70,
    height: 70,
    borderRadius: 50,
    backgroundColor: "#fff",
    padding: 4,
  },
  takePhotoButton: {
    borderWidth: 2,
    borderColor: "#000",
    backgroundColor: "#fff",
    borderRadius: 50,
    width: "100%",
    height: "100%",
  },
  toggleCamera: {
    backgroundColor: "rgba(255, 255, 255, 0.1)",
    borderRadius: 50,
    alignItems: "center",
    justifyContent: "center",
    height: 60,
    width: 60,
  },
  photoContainer: {
    position: "absolute",
    top: 50,
    left: 20,
    right: 20,
    backgroundColor: "rgba(0, 0, 0, 0.5)",
    padding: 10,
    borderRadius: 10,
  },
  photoOverlay: {
    position: "absolute",
    bottom: 80,
    left: 20,
    width: 140,
    height: 170,
    borderRadius: 12,
    borderWidth: 1,
    borderColor: "#fff",
    overflow: "hidden",
    justifyContent: "center",
    alignItems: "center",
    zIndex: 2,
  },
  photoImage: {
    width: "100%",
    height: "100%",
  },
});

First, we add two buttons and an image placeholder. The middle button takes a photo, the button on the right toggles the camera, and the placeholder image opens the gallery:

      <View style={styles.controlsView}>
        <TouchableOpacity style={styles.photogallery}>
          {/* placeholder that will later show the newest gallery photo */}
        </TouchableOpacity>
        <TouchableOpacity style={styles.takePhoto} onPress={takePhoto}>
          <View style={styles.takePhotoButton} />
        </TouchableOpacity>
        <TouchableOpacity style={styles.toggleCamera} onPress={toggleCamera}>
          <MaterialIcons name="flip-camera-android" size={30} color="#fff" />
        </TouchableOpacity>
      </View>

The camera roll package we installed earlier also provides a hook that makes it easy to save images and fetch photos from the camera roll. To retrieve images, call its getPhotos() method to get the first 20 photos:

const [photos, getPhotos, save] = useCameraRoll();
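Under the hood the hook uses CameraRoll.getPhotos, which you can also call directly. For instance, fetching the 20 most recent photos might look like this (the parameter values are only an example):

// Fetch the 20 most recent photos from the camera roll
const { edges } = await CameraRoll.getPhotos({
  first: 20,
  assetType: "Photos",
});

// Each edge contains a node with the photo's URI and metadata
edges.forEach(({ node }) => console.log(node.image.uri));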

Copy the code below into your app; I'll explain it underneath:

import React, { useEffect, useRef, useState } from "react";
import {
  View,
  Text,
  Image,
  Alert,
  Pressable,
  ScrollView,
  Dimensions,
  StyleSheet,
  TouchableOpacity,
  Platform,
} from "react-native";
import {
  Camera,
  useCameraDevice,
  useCameraPermission,
} from "react-native-vision-camera";
import {
  CameraRoll,
  PhotoIdentifier,
  useCameraRoll,
} from "@react-native-camera-roll/camera-roll";
import { Entypo, MaterialIcons } from "@expo/vector-icons";
import { SafeAreaView } from "react-native-safe-area-context";
import { hasAndroidPermission } from "@/hooks/usePermission";
export default function HomeScreen() {
  const cameraRef = useRef(null);
  const [photos, getPhotos] = useCameraRoll();
  const timerRef = useRef(null);
  const { hasPermission, requestPermission } = useCameraPermission();
  const [toggleFrontCamera, setToggleFrontCamera] = useState(false);
  const device = useCameraDevice(toggleFrontCamera ? "front" : "back");
  const [showPhoto, setShowPhoto] = useState(false);
  const [showGallery, setShowGallery] = useState(true);
  const [isRecording, setIsRecording] = useState(false);
  const [toggleVideo, setToggleVideo] = useState(false);
  const [videoTimer, setVideoTimer] = useState(0);
  const [photoUri, setPhotoUri] = useState(null);
  const [gallery, setGallery] = useState([]);
  const [turnOnFlash, setTurnOnFlash] = useState<"off" | "on">("off");
  useEffect(() => {
    const getGallery = () => {
      CameraRoll.getPhotos({
        first: 1,
        assetType: "Photos",
      })
        .then((photo) => {
          setGallery(photo.edges);
        })
        .catch((err) => {
          console.error(err);
        });
    };
    getGallery();
    return () => getGallery();
  }, [photos, photoUri]);
  useEffect(() => {
    let timer: NodeJS.Timeout;
    if (showPhoto) {
      timer = setTimeout(() => {
        setShowPhoto(false);
      }, 3000);
    }
    return () => {
      clearTimeout(timer);
    };
  }, [showPhoto]);
  const takePhoto = async () => {
    if (Platform.OS === "android" && !(await hasAndroidPermission())) {
      return;
    }
    if (cameraRef.current) {
      const { path } = await cameraRef.current.takePhoto({
        flash: turnOnFlash,
        enableShutterSound: true,
      });
      await CameraRoll.saveAsset(`file://${path}`, {
        type: "photo",
      });
      setPhotoUri(path);
      setShowPhoto(true);
    } else {
      Alert.alert("Camera not ready");
    }
  };
  const startRecording = async () => {
    if (cameraRef.current) {
      try {
        if (isRecording) {
          await cameraRef.current.stopRecording();
          setIsRecording(false);
          if (timerRef.current) {
            clearInterval(timerRef.current);
            timerRef.current = null;
          }
        } else {
          cameraRef.current.startRecording({
            flash: turnOnFlash,
            onRecordingError: (error) =>
              console.error("Recording error:", error),
            onRecordingFinished: async ({ path }) => {
              try {
                await CameraRoll.saveAsset(`file://${path}`, { type: "video" });
              } catch (error) {
                console.error("Error saving video:", error);
              }
              if (timerRef.current) {
                clearInterval(timerRef.current);
              }
            },
          });
          setIsRecording(true);
          setVideoTimer(0);
          timerRef.current = setInterval(() => {
            setVideoTimer((prevTime) => prevTime + 1);
          }, 1000);
        }
      } catch (error) {
        console.error("Error during recording:", error);
        setIsRecording(false);
        if (timerRef.current) {
          clearInterval(timerRef.current);
        }
      }
    }
  };
  const toggleFlash = () => {
    setTurnOnFlash((prev) => (prev === "on" ? "off" : "on"));
  };
  const toggleCamera = () => {
    setToggleFrontCamera((prev) => !prev);
  };
  const startVideoTimer = (timeInSeconds: number) => {
    const minutes = Math.floor(timeInSeconds / 60);
    const seconds = timeInSeconds % 60;
    return `${String(minutes).padStart(2, "0")}:${String(seconds).padStart(
      2,
      "0"
    )}`;
  };
  if (!hasPermission || !device) {
    return (
      <View style={styles.permissionView}>
        <Text style={styles.permissionText}>
          Camera App requires permission.
        </Text>
        <TouchableOpacity
          style={styles.permissionButton}
          onPress={requestPermission}
        >
          <Text style={styles.permissionButtonText}>Grant Permission</Text>
        </TouchableOpacity>
      </View>
    );
  }
  return (
    <View style={styles.container}>
      {showGallery && photos.edges.length > 0 ? (
        <SafeAreaView style={{ flex: 1, width: "100%" }}>
          <ScrollView>
            <TouchableOpacity
              onPress={() => setShowGallery(false)}
              style={{
                margin: 10,
                marginVertical: 20,
                padding: 10,
                borderRadius: 5,
                backgroundColor: "#ccc",
                alignItems: "flex-end",
                marginLeft: "auto",
              }}
            >
              <Text>Back to Camera</Text>
            </TouchableOpacity>
            <View style={styles.galleryGrid}>
              {photos.edges.map((item, index) => {
                return (
                  <Image
                    key={index}
                    source={{ uri: item.node.image.uri }}
                    style={styles.galleryPhotos}
                  />
                );
              })}
            </View>
          </ScrollView>
        </SafeAreaView>
      ) : (
        <>
          <Camera
            ref={cameraRef}
            style={StyleSheet.absoluteFill}
            device={device}
            isActive={true}
            photo={true}
            video={true}
            audio={true}
          />
          {showPhoto && photoUri && (
            <View style={styles.photoOverlay}>
              <Image
                source={{ uri: `file://${photoUri}` }}
                style={styles.photoImage}
              />
            </View>
          )}
          {isRecording && (
            <Text style={styles.videoTimer}>{startVideoTimer(videoTimer)}</Text>
          )}
          <TouchableOpacity style={styles.flash} onPress={toggleFlash}>
            <MaterialIcons
              name={turnOnFlash === "on" ? "flash-on" : "flash-off"}
              size={30}
              color="#fff"
            />
          </TouchableOpacity>
          <View style={styles.textOverlayView}>
            <Pressable onPress={() => setToggleVideo(false)}>
              <Text style={styles.textOverlayText}>Photo</Text>
            </Pressable>
            <Pressable onPress={() => setToggleVideo(true)}>
              <Text style={styles.textOverlayText}>Video</Text>
            </Pressable>
          </View>
          <View style={styles.controlsView}>
            {gallery.length > 0 ? (
              gallery.slice(0, 1).map((item, index) => {
                return (
                  <Pressable
                    onPress={() => {
                      getPhotos();
                      setShowGallery(true);
                    }}
                    key={index}
                  >
                    <Image
                      source={{ uri: item.node.image.uri }}
                      style={styles.photogallery}
                    />
                  </Pressable>
                );
              })
            ) : (
              <View style={styles.photogallery}>
                <Entypo name="images" size={30} color="#fff" />
              </View>
            )}
            {toggleVideo ? (
              <TouchableOpacity
                style={[styles.takePhoto, isRecording && { backgroundColor: "red" }]}
                onPress={startRecording}
              >
                <View style={styles.takePhotoButton} />
              </TouchableOpacity>
            ) : (
              <TouchableOpacity style={styles.takePhoto} onPress={takePhoto}>
                <View style={styles.takePhotoButton} />
              </TouchableOpacity>
            )}
            <TouchableOpacity style={styles.toggleCamera} onPress={toggleCamera}>
              <MaterialIcons name="flip-camera-android" size={30} color="#fff" />
            </TouchableOpacity>
          </View>
        </>
      )}
    </View>
  );
}

In the code above, we first use `useCameraPermission` to check whether the user has granted camera access. If permission hasn't been granted, the app prompts the user to allow camera access.

Next, we have the following pieces of state to handle our features:

  • toggleFrontCamera: switches between the front and back camera
  • showPhoto: acts as our preview, i.e. it displays the last photo taken with the camera
  • showGallery: toggles between the camera view and the gallery viewer
  • isRecording: tracks whether a video is currently being recorded
  • toggleVideo: switches between video and photo mode
  • videoTimer: tracks the recording, i.e. how long the video has been recording
  • photoUri: stores the URI path of the last photo taken
  • gallery: stores the fetched gallery photos for display
  • turnOnFlash: turns the camera flash on or off

When the user taps the capture button in photo mode, a photo is taken with `cameraRef.current.takePhoto()` and then saved to the gallery with `CameraRoll.saveAsset()`. The photo's URI is stored in the `photoUri` state, and `showPhoto` is set to true so the photo is displayed temporarily.

Similarly, in video mode, pressing the capture button starts a recording with `cameraRef.current.startRecording()`; a timer starts and is tracked via the `videoTimer` state, and it is displayed at the top center of the screen. When the user ends the recording, the video is saved to the gallery and the timer is cleared.

There is also a flash control (`turnOnFlash`) that toggles the camera flash on and off, a toggle-camera button that switches between the front and back camera via `toggleFrontCamera`, and a pressable image in the bottom-left corner that takes the user to the gallery viewer.
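One detail worth calling out: on Android, saving to the camera roll also requires media/storage permissions, which is why `takePhoto` first checks `hasAndroidPermission` from `@/hooks/usePermission`. That helper isn't shown here, but a minimal sketch modeled on the permission check suggested in the `@react-native-camera-roll/camera-roll` docs could look like this (the exact permissions depend on the Android versions you target):

// hooks/usePermission.js -- a hypothetical helper, sketched after the
// permission check recommended by @react-native-camera-roll/camera-roll
import { PermissionsAndroid, Platform } from "react-native";

export async function hasAndroidPermission() {
  // Android 13+ uses granular media permissions; older versions use external storage
  const checkPermissions = () => {
    if (Number(Platform.Version) >= 33) {
      return Promise.all([
        PermissionsAndroid.check(PermissionsAndroid.PERMISSIONS.READ_MEDIA_IMAGES),
        PermissionsAndroid.check(PermissionsAndroid.PERMISSIONS.READ_MEDIA_VIDEO),
      ]).then(([images, video]) => images && video);
    }
    return PermissionsAndroid.check(
      PermissionsAndroid.PERMISSIONS.READ_EXTERNAL_STORAGE
    );
  };

  if (await checkPermissions()) return true;

  // Request whichever permissions are missing
  if (Number(Platform.Version) >= 33) {
    const statuses = await PermissionsAndroid.requestMultiple([
      PermissionsAndroid.PERMISSIONS.READ_MEDIA_IMAGES,
      PermissionsAndroid.PERMISSIONS.READ_MEDIA_VIDEO,
    ]);
    return Object.values(statuses).every(
      (status) => status === PermissionsAndroid.RESULTS.GRANTED
    );
  }
  const status = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.READ_EXTERNAL_STORAGE
  );
  return status === PermissionsAndroid.RESULTS.GRANTED;
}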

QR code scanner

Another great use case for the camera is QR code and barcode scanning. For this, react-native-vision-camera provides a code scanner instance that can be used to detect codes.

First, enable the code scanner in the `app.json` file:

{
  "name": "my app",
  "plugins": [
    [
      "react-native-vision-camera",
      {
        // ...
        "enableCodeScanner": true
      }
    ]
  ]
}

Next, let's create the instance and pass it to our camera stream:

const codeScanner = useCodeScanner({
  codeTypes: ["qr", "ean-13", "upc-a"],
  onCodeScanned: (codes) => {
    for (const code of codes) {
      console.log(`Scanned ${code.type}: ${code.value}`, code.frame);
    }
  },
});

return (
  <Camera
    style={StyleSheet.absoluteFill}
    device={device}
    isActive={true}
    codeScanner={codeScanner}
  />
);

Try scanning any barcode and you'll get the exact value.
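In a real app you'll usually want to do more with the result than log it. One common pattern is to keep the most recent value in state so you can render it over the camera preview; the overlay below is only illustrative:

const [lastCode, setLastCode] = useState(null);

const codeScanner = useCodeScanner({
  codeTypes: ["qr", "ean-13", "upc-a"],
  onCodeScanned: (codes) => {
    // Keep only the first code detected in this frame
    if (codes.length > 0) {
      setLastCode(codes[0].value);
    }
  },
});

// ...then, in the render, below the <Camera /> element:
// {lastCode && (
//   <Text style={{ position: "absolute", top: 60, alignSelf: "center", color: "#fff" }}>
//     {lastCode}
//   </Text>
// )}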

Face detection

Another great use case is face detection. Being able to identify and locate faces in images and videos is a real-world scenario used widely in everything from photography to security systems.

For face detection, we need `react-native-vision-camera`'s frame processors. Before that, we'll install Worklets, which lets us run JavaScript functions on a separate thread.

To install Worklets and the package we'll use for face detection, run the following command:

npm i react-native-worklets-core vision-camera-face-detector

Next, add the plugin below to your `babel.config.js` file. If you're using Expo, create the file and add the plugin, like so:

module.exports = {
  plugins: [
    ['react-native-worklets-core/plugin'],
  ],
}

To implement face detection, see the following code:

import { StyleSheet } from "react-native";
import {
  Camera,
  useCameraDevice,
  useFrameProcessor,
} from "react-native-vision-camera";
import { Worklets } from "react-native-worklets-core";
import { scanFaces } from "vision-camera-face-detector";

export default function App() {
  const device = useCameraDevice("back");

  // Created once on the JS thread so the frame processor worklet can call back into JS
  const onFacesDetected = Worklets.createRunOnJS((faces) => {
    console.log(faces);
  });

  const faceDetectionProcessor = useFrameProcessor(
    (frame) => {
      "worklet";
      try {
        const detectedFaces = scanFaces(frame);
        if (detectedFaces) {
          onFacesDetected(detectedFaces);
        }
      } catch (error) {
        console.log("Error scanning faces", error);
      }
    },
    [onFacesDetected]
  );

  if (device == null) return null;

  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
      frameProcessor={faceDetectionProcessor}
    />
  );
}

Conclusion

There are several other use cases for camera functionality, from motion detectors to image labelers and object detection. Facial recognition is a popular and widespread use case where the camera can be used for verification in an application.

You can find the complete source code for the implementations covered above here.

If you'd like to try other use cases, here you'll find several plugins that you can integrate with VisionCamera and your camera app.
