Towards Secure and Usable 3D Assets: A Novel Framework for Automatic Visible Watermarking

Source: https://arxiv.org/html/2409.00314 (published Thu, 19 Sep 2024)

Gursimran Singh, Tianxi Hu∗, Mohammad Akbari∗, Qiang Tang, Yong Zhang

Huawei Technologies Canada Co. Ltd.

{gursimran.singh1, cindy.hu1, mohammad.akbari, qiang.tang, yong.zhang3}@huawei.com

Abstract

3D models, particularly AI-generated ones, have witnessed a recent surge across various industries such as entertainment. Hence, there is a pressing need to protect the intellectual property of these valuable assets and prevent their misuse. As a viable solution to address these concerns, we rigorously define the novel task of automated 3D visible watermarking in terms of two competing aspects: watermark quality and asset utility. Moreover, we propose a method of embedding visible watermarks that automatically determines the right location, orientation, and number of watermarks to be placed on arbitrary 3D assets for high watermark quality and asset utility. Our method is based on a novel rigid-body optimization that uses back-propagation to automatically learn transforms for ideal watermark placement. In addition, we propose a novel curvature-matching method for fusing the watermark into the 3D model that further improves readability and security. Finally, we provide a detailed experimental analysis on two benchmark 3D datasets validating the superior performance of our approach in comparison to baselines. Code and demo are available here: https://developer.huaweicloud.com/develop/aigallery/notebook/detail?id=15adbaaa-2583-4ec3-804a-61c29f001e03.

∗Equal contribution.

1 Introduction

The increasing demand for 3D assets across industries like entertainment [17, 22], augmented reality, and virtual reality [23, 30] has driven the adoption of efficient and scalable methods for creating and distributing content. Generative AI (GenAI) has revolutionized automated 3D content creation, while commercial marketplaces have enhanced distribution networks. This evolution necessitates robust mechanisms to prevent misuse, validate ownership, and protect intellectual property (IP).

Governments and law enforcement agencies are concerned about potential misuse by malicious actors who may exploit automated AI tools to produce controversial 3D content at scale. Such content could be used for spreading misinformation, influencing public opinion, or provoking social unrest. In response, governments (USA [14], China [1], and Europe [19]) are exploring regulations that would mandate GenAI services to embed and publicly disclose the origin of their generated content. In addition, in 3D data marketplaces, sellers must present their 3D assets for potential customers to preview using built-in 3D viewers. However, this practice can be exploited by malicious individuals who download these assets under the guise of previewing them, resulting in significant financial losses for the merchants. Therefore, safeguarding the intellectual property (IP) of these valuable assets within 3D data marketplaces is crucial to prevent unauthorized distribution and ensure fair compensation for the creators.

Digital watermarking is a key technology for copyright protection, source tracking, and authentication [26, 25, 28]. The majority of existing watermarking work in 3D focuses on invisible methods [3, 4, 29, 31, 33]. However, such methods have several drawbacks that limit their practicality in the scenarios discussed above. For instance, in the regulation scenario, it is essential for the general public to identify the asset’s origin visually, without requiring specialized tools or knowledge [14, 1]. In contrast, invisible methods rely on specific, often publicly inaccessible, extractors, which complicates watermark detection for non-experts. Furthermore, when 3D models are employed in downstream applications like video games or animations, the original watermarked assets become inaccessible for extraction. Consequently, extraction must rely on analyzing 2D visuals that undergo significant alterations, such as changes in lighting and texture, which can hinder successful extraction [34]. Lastly, invisible watermarks, designed to be hidden and imperceptible, do not sufficiently deter unauthorized use in a preemptive manner. Instead, they are more suited as a remedy after the alleged infringement has already happened. In the digital marketplace, copyright infringement can be easily committed through the replication of digital assets. However, identifying and proving such infringement is challenging, leading to complex and costly legal actions.

In this work, we propose the novel task of automated 3D visible watermarking as a viable alternative. The objective is to embed a copyright mark, such as a visible logo or text message, across different areas of the 3D model surface. These visible watermarks are designed to overcome the limitations associated with invisible methods discussed previously. Specifically, in regulatory contexts, smaller visible watermarks can be strategically positioned in less intrusive areas of 3D models. This approach aims to maintain the model’s functionality while using watermarks as stamps to track provenance information. Thus, the general public can readily interpret visible watermarks to discern whether the content is AI-generated or created by humans. On the other hand, for merchandising scenarios, we can strategically place large and prominent visible watermarks at various locations on the asset that act as a strong deterrent against unauthorized use. This encourages interested customers to purchase the unwatermarked model, ensuring fair compensation for the asset owner. Hence, this approach offers proactive protection of intellectual property compared to relying solely on legal action after misuse.

Automated 3D visible watermarking is a challenging task due to several reasons. First, the algorithm needs to automatically identify the most suitable locations (specific coordinates) on the 3D model surface to embed watermarks. In other words, the algorithm must choose locations that boost the visibility of watermarks for high security, while preserving the most salient and prominent features of the original model. Second, the algorithm must guarantee that the entire watermark is well-aligned along the model’s surface curvature with no component floating, as such parts can be easily removed using an isolated parts detection algorithm, thereby compromising the security of the watermarks. Finally, the algorithm must fuse the watermarks into the 3D geometry rather than merely inscribing them into the model texture. This makes it an irreversible process where the watermarks are very hard and expensive to remove.

To address these challenges, we introduce an end-to-end pipeline for automatically embedding visible watermarks on arbitrary 3D models. Our solution proposes a novel gradient-based optimization method that generates a large number of candidate watermarks placed throughout the surface of the model. Having generated these diverse candidates, we define a filtering-based algorithm to pick the final set of watermarks based on the specific utility and security requirements. Finally, we propose a novel Boolean-based mesh merging to fuse watermarks into the geometry while matching the local curvature of the underlying surfaces. The major contributions of this work are as follows:

  • We define the novel task of automatic 3D visible watermarking, where the main goal is to determine the number, orientation, and location of watermarks to achieve the best watermark quality and best asset quality. We also provide rigorous definitions for various aspects of both watermark quality and asset quality.
  • We propose a novel end-to-end pipeline as a solution for the task of automated 3D visible watermarking. Our solution is based on a gradient-based optimization to determine the location and rotation transforms for the best orientation. Additionally, we propose a novel curve-matching fusion for enforcing the watermarks to follow the local curvature of the underlying surface.
  • We propose a holistic evaluation benchmark for 3D visible watermarking for future research. We propose practical metrics for various aspects of watermark and asset quality. Further, we provide detailed experiments on three 3D datasets to demonstrate the superiority of our approach in comparison to the baselines.

2 Related Works

Watermarking 3D models is a problem of great interest [2, 3, 4, 7, 33]. While 3D invisible watermarking [3, 4, 29, 31, 33] is well-established, the field of 3D visible watermarking remains in its infancy. Traditional approaches [2, 6, 24] use edge subdivisions to carve watermark characters on a smooth surface of the mesh. These approaches give the visual effect of watermark characters; however, they are designed to be viewed only using mesh-editing software that can display edge information. Hence, they are not visible to the naked eye in rendering mode or after printing the 3D model. Another limitation of these methods is that they do not support 3D formats without any edge information (e.g., voxel format). Recently, Li et al. [16] proposed an approach to embed a single 3D watermark on a model using mesh Boolean operations, a technique similar to ours. However, their method has several limitations. First, they do not provide a framework for automatic identification of the locations to put watermarks on the surface of the model. Instead, they only provide some guidelines for humans to select locations manually. Second, their algorithm does not support fixing the orientation of watermarks, which is critical to ensure security against removal attacks. Finally, the specific Boolean merging operations proposed in their work do not match the curvature of the underlying surface, leading to poor visual quality of results. On the other hand, our method automatically selects and places watermarks on the surface of the 3D mesh with a focus on both watermark and asset quality.

3 Problem Definition

At a high level, the task of 3D visible watermarking is to automatically generate the watermarked (output) model by embedding multiple watermarks on the target (input) model. Formally, the inputs to the task include: 1) a target 3D mesh $M(V, F)$ consisting of a set of vertices $V = \{v_i \mid v_i \in \mathbb{R}^3\}$ and a set of faces $F = \{(v_i, v_j, v_k) \mid v_i, v_j, v_k \in V\}$; 2) the watermark text; and 3) algorithm parameters such as the font, thickness, and size of the watermarks.

The output is a 3D mesh $M'(V', F')$ with a variable number of watermarks, denoted by $H_f$, embedded into its surface at multiple locations. The new set of vertices $V'$ and faces $F'$ represent the modified geometry, where the watermarks are an inseparable part of the watermarked mesh $M'$. Additionally, we denote the sub-mesh corresponding to the $i^{th}$ embedded watermark as $W_i \subset M'$ and its assumed minimum-volume bounding box as $B_i$.

The central problem in 3D visible watermarking is to determine the number, location, and orientation of the watermarks to be placed on the target model’s surface. For instance, having too many watermarks can obscure crucial details of the model, making it unusable for downstream applications. Conversely, too few watermarks may compromise security, allowing adversaries to exploit views without watermarks to create 2D renderings illegitimately. Similarly, the location and orientation of the watermarks have security-vs-utility consequences that need to be kept in mind during the watermarking process. To make it concrete, we group these security and utility consequences into two main groups - watermark quality and asset utility, and discuss them in detail in the following sub-sections.

3.1 Watermark Quality

Watermark quality encompasses various aspects of the security of the watermarked asset. For high security, the placed watermarks must be 1) hard for an adversary to remove, 2) easily readable by the human eye, and 3) viewable from multiple camera angles. To make the problem concrete, we distill these individual aspects into two quantifiable and measurable metrics: watermark placement and watermark visibility.

Watermark Placement: The principle is that watermarks should be precisely aligned on the surface of the target mesh to enhance security, readability, and visual quality. Misaligned watermarks, where characters are embedded within the mesh surface, are undesirable for several reasons. Firstly, isolated characters can be easily detected and removed by automated algorithms, compromising the security of the watermark. Secondly, embedded parts make it difficult to fully read the watermark, affecting its readability. Lastly, watermarks that do not conform to the mesh surface appear visually less appealing to human observers.

Definition 1. Watermark placement is quantified as the average proportion of the intersection area between the original mesh $M$ and each watermark mesh $W^i$ from the set of all watermarks $\{W^i\}_{i=1}^{H_f}$. This is computed as:

$$\mathcal{P}(M, M') = \frac{1}{H_f} \sum_{i=1}^{H_f} \max\left\{ \frac{\mathcal{A}_{\hat{n}^i}(M \cap W^i)}{\mathcal{A}_{\hat{n}^i}(W^i)},\ 1 \right\} \quad (1)$$

where $\mathcal{A}_{\hat{n}}(w)$ denotes the projected area of a mesh $w$ along a normal vector $\hat{n}$, $W^i \subset M'$ is the $i^{th}$ watermark mesh, and $\hat{n}^i$ is the normal vector facing the front side of the corresponding watermark mesh.
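The projected-area ratio underlying this metric can be approximated numerically once the intersection mesh $M \cap W^i$ is available (computing that intersection requires a mesh Boolean library and is assumed done here). Below is a minimal numpy sketch with hypothetical helper names, averaging the raw ratios without the clamp in Eq. 1:

```python
import numpy as np

def projected_area(triangles, n_hat):
    # Sum of per-triangle areas after projecting every vertex onto the
    # plane orthogonal to the unit normal n_hat.
    n_hat = n_hat / np.linalg.norm(n_hat)
    # Strip the component along n_hat from each vertex: shape (n, 3, 3).
    proj = triangles - np.einsum('ijk,k->ij', triangles, n_hat)[..., None] * n_hat
    e1 = proj[:, 1] - proj[:, 0]
    e2 = proj[:, 2] - proj[:, 0]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

def placement_score(intersection_tris, watermark_tris, normals):
    # Mean ratio of projected intersection area to projected watermark
    # area over all watermarks, each along its front-facing normal.
    ratios = [projected_area(inter, n) / projected_area(w, n)
              for inter, w, n in zip(intersection_tris, watermark_tris, normals)]
    return float(np.mean(ratios))
```

A perfectly placed watermark (intersection equal to the watermark itself) scores 1.0; partially floating characters pull the score down.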

Watermark Visibility: This metric quantifies whether watermarks are clearly visible and readable from various camera angles around the watermarked mesh. This is crucial for security, as higher visibility prevents adversaries from obtaining renders (2D shots) of the asset without watermarks. Additionally, improved visibility ensures better readability, enabling investigators to easily identify and verify the presence of the watermark from multiple angles.

Definition 2. Consider a set of camera views $\{c^t\}_{t=1}^{T}$ obtained by randomly rotating a camera around the watermarked object. Let $\mathcal{K}_{\mathcal{V}}(M', c^t)$ be a kernel that equals 1 if a watermark on $M'$ is visible to a human in view $c^t$, and 0 otherwise. For a large number $T$ of views, we define the watermark visibility $\mathcal{V}(M')$ as the proportion of views in which at least one watermark is visible:

$$\mathcal{V}(M') = \frac{1}{T} \sum_{t=1}^{T} \mathcal{K}_{\mathcal{V}}(M', c^t) \quad (2)$$
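Since $\mathcal{K}_{\mathcal{V}}$ is defined per view, this visibility score lends itself to a Monte Carlo estimate. A minimal sketch, assuming the caller supplies the visibility predicate (in practice backed by a renderer plus a human-readability check, neither of which is specified here):

```python
import numpy as np

def visibility_score(is_watermark_visible, num_views=1000, seed=0):
    # Monte Carlo estimate of Eq. 2: fraction of random camera
    # directions from which at least one watermark is visible.
    # `is_watermark_visible` maps a unit direction vector to True/False.
    rng = np.random.default_rng(seed)
    # Uniform directions on the sphere via normalized Gaussian samples.
    dirs = rng.normal(size=(num_views, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    hits = sum(bool(is_watermark_visible(d)) for d in dirs)
    return hits / num_views
```

For example, a watermark visible only from the upper hemisphere should score close to 0.5 under this estimator.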

Figure 1: The overall framework of our proposed automatic 3D visible watermarking.

3.2 Asset Utility

Asset utility encompasses various factors that affect the usefulness of the watermarked asset across different downstream applications. Given the subjective nature and complexity of defining downstream utility, our focus is primarily on assessing any potential degradation caused by watermarking. Specifically, we evaluate how the watermarking process impacts the asset in terms of geometry, saliency, and semantic integrity. For example, adding more or larger watermarks to an asset results in more significant degradation. Similarly, placing watermarks in highly noticeable and salient areas of the asset can diminish its utility more than choosing less conspicuous, flat areas. Accordingly, we define specific criteria for assessing asset utility: geometry similarity, saliency preservation, and semantic preservation.

Geometry Similarity: This metric quantifies alterations in the appearance-level 3D geometry of the watermarked asset $M'$ relative to the original asset $M$. It focuses on detecting changes such as surface geometry modifications, variations in local curvature, and occlusions of surface features resulting from the watermarking process.

Definition 3. Let $\mathcal{K}_{\mathcal{G}}(M', M, dr)$ denote a kernel quantifying the local 3D geometry similarity between a small region $dr$ of the target mesh $M$ and the corresponding region of the watermarked mesh $M'$. The geometry similarity is defined as:

$$\mathcal{G}(M, M') = \int_{r} \mathcal{K}_{\mathcal{G}}(M, M', dr) \cdot dr \quad (3)$$
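In practice the integral above is approximated by sampling the surfaces. A toy discretization, using surface sample points and a Gaussian of the nearest-neighbour distance as a stand-in for $\mathcal{K}_{\mathcal{G}}$ (the text does not fix a specific kernel, so both the kernel and the bandwidth `sigma` are illustrative assumptions):

```python
import numpy as np

def geometry_similarity(orig_pts, wm_pts, sigma=0.05):
    # Discrete approximation of Eq. 3: treat each original sample point
    # as a small region dr, measure how far the watermarked surface has
    # moved from it (nearest-neighbour distance), and average a
    # Gaussian kernel of that distance over all samples.
    d2 = ((orig_pts[:, None, :] - wm_pts[None, :, :]) ** 2).sum(-1)
    nearest = np.sqrt(d2.min(axis=1))
    return float(np.exp(-(nearest / sigma) ** 2).mean())
```

Identical surfaces score 1.0; heavy geometric disturbance drives the score toward 0.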

Saliency Preservation: This criterion quantifies whether the salient and highly noticeable features of the original mesh are preserved during watermarking. For instance, in the context of a cat object, critical features like the face, ears, and paws must be preserved to ensure the continued usefulness of the model for downstream applications.

Definition 4. Let $\mathcal{K}_{\mathcal{S}}(r)$ denote a kernel computing an estimate of the average saliency of a local region $r$ in the original model $M$. Using a threshold $\tau_s$ determined by Otsu’s method [32], we define the saliency retention as:

$$\mathcal{S}(M, M') = \frac{1}{H_f} \sum_{i=1}^{H_f} \mathbb{I}\left(\mathcal{K}_{\mathcal{S}}(M \cap B^i) < \tau_s\right) \quad (4)$$

where $\mathbb{I}(c)$ is an indicator function whose value is 1 if $c$ is true and 0 otherwise.
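This criterion can be sketched directly once per-region saliency estimates exist. The snippet below implements a basic Otsu threshold from scratch and applies the indicator; the saliency estimator $\mathcal{K}_{\mathcal{S}}$ itself (e.g., a mesh saliency model) is assumed to be supplied by the caller:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    # Otsu's method: pick the histogram split maximizing the
    # between-class variance of the two resulting clusters.
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:k] * centers[:k]).sum() / w0
        m1 = (hist[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def saliency_preservation(region_saliency, mesh_saliency):
    # Eq. 4: fraction of watermark regions whose estimated saliency
    # falls below the Otsu threshold computed over the whole mesh.
    tau = otsu_threshold(np.asarray(mesh_saliency))
    return float(np.mean(np.asarray(region_saliency) < tau))
```

Watermarks placed on flat, low-saliency regions count toward the score; those covering salient features do not.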

Semantics Preservation: This criterion addresses the potential degradation in high-level semantic concepts caused by watermarking. To maintain high semantics preservation, watermarks should avoid locations that could alter or obscure the semantic understanding of the object. For example, adding numerous watermarks to the face of an animal might make it challenging to discern whether it semantically represents a cat or a dog.

Definition 5. Let $\{c_o^t\}_{t=1}^{T}$ and $\{c_w^t\}_{t=1}^{T}$ denote sets of corresponding camera views obtained by repeatedly and randomly rotating a camera around the original and watermarked objects, respectively. Let $\mathcal{K}_{\mathcal{F}}(c_o^t, c_w^t)$ be a kernel estimating the semantic similarity between corresponding views $c_o^t$ and $c_w^t$. For a large value of $T$, semantics preservation is defined as:

$$\mathcal{F}(M, M') = \frac{1}{T} \sum_{t=1}^{T} \mathcal{K}_{\mathcal{F}}(c_o^t, c_w^t) \quad (5)$$
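Operationally this reduces to averaging a pairwise similarity over rendered view pairs. A sketch, assuming a caller-supplied image encoder (e.g., a CLIP-style model, which this section does not prescribe) and cosine similarity as the kernel $\mathcal{K}_{\mathcal{F}}$:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantics_preservation(orig_views, wm_views, embed):
    # Eq. 5: average semantic similarity over paired renders of the
    # original and watermarked asset. `embed` maps a rendered view to
    # an embedding vector; cosine similarity plays the role of K_F.
    sims = [cosine(embed(o), embed(w)) for o, w in zip(orig_views, wm_views)]
    return float(np.mean(sims))
```

A score near 1 indicates the watermarks did not alter what the object is perceived to be across views.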

3.3 Watermark Quality vs. Asset Utility

The ultimate goal of the 3D visible watermarking task is to obtain high asset utility and high watermark quality. Ideally, the asset utility before and after watermarking should be approximately the same. At the same time, we would like high-quality watermarks that are visible from all angles to the human eye. However, watermarking often involves a trade-off between asset utility and watermark quality. The goal of this work is to improve this fundamental trade-off by advancing both asset utility and watermark quality together.

4 Method

In this section, we outline our proposed solution for automated 3D visible watermarking. Fig. 1 presents a high-level visualization of the entire process. Our pipeline is composed of four main modules: initialization, finetuning, filtering, and embossing. Initially, we generate a large number of candidate boxes on the surface of the target model that serve as placeholders for the actual 3D watermarks. Next, the finetuning module adjusts the position and orientation of these boxes to ensure they flow along the surface of the mesh rather than protruding out of or into the surface. Then, the filtering module selects the final subset of watermarks with the aim of fulfilling the high watermark quality and high asset utility requirements. Finally, the embossing module integrates the 3D text watermark into the target model’s surface, ensuring the watermarks conform to the local surface curvature for optimal visual appeal.

4.1 Initialization

We begin by sampling a set of $H$ equidistant points on the surface of the target model. At each sampled point $i$, we create a 3D rectangular box, denoted $B_i(V_i, F_i)$, and orient it to face along the corresponding surface normal. These boxes collectively serve as placeholders that will later be substituted with 3D text watermarks. For a detailed description of the initialization algorithm, please consult the supplementary materials.
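One way to realize this orientation step is to build each placeholder box in a canonical frame and rotate it so its +Z axis aligns with the sampled surface normal. A minimal numpy sketch using Rodrigues' rotation formula; the helper names and box dimensions are illustrative, not from the paper:

```python
import numpy as np

def rotation_to_normal(n_hat):
    # Rotation matrix taking the +Z axis onto the unit surface normal
    # n_hat (Rodrigues' formula); used to orient a placeholder box.
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(z, n_hat)
    c = float(np.dot(z, n_hat))
    if np.isclose(c, -1.0):            # normal points straight down
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def init_box(center, n_hat, size=(0.2, 0.05, 0.02)):
    # Eight corners of an axis-aligned box, rotated to face along the
    # surface normal and moved to the sampled surface point.
    sx, sy, sz = size
    corners = np.array([[x, y, z] for x in (-sx, sx)
                        for y in (-sy, sy) for z in (-sz, sz)]) / 2.0
    return corners @ rotation_to_normal(n_hat).T + center
```

Repeating `init_box` over all $H$ sampled points yields the candidate set that the finetuning stage then optimizes.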

4.2 Finetuning

The boxes generated in the initialization phase need to be fine-tuned for optimal watermark placement. Specifically, they need to be moved to appropriate locations and rotated to properly align along the surface of the model. Since the required translation and rotation are unknown in advance, we propose a novel rigid-transform optimization approach to estimate the necessary transforms automatically.

To accomplish this, we introduce rotation parameters $(\theta_i^\alpha, \theta_i^\beta, \theta_i^\gamma)$ and translation parameters $(\theta_i^X, \theta_i^Y, \theta_i^Z)$. Here, the rotation parameters specify the angles of rotation, while the translation parameters indicate the displacements along the X, Y, and Z axes, respectively. Using these parameters, we apply translation and rotation operations to each candidate box. For the $i^{th}$ candidate box, the transformed vertices $\tilde{V}_i$ are obtained as follows:

$$\tilde{V}_i = T_{C_i^X, C_i^Y, C_i^Z} \cdot R_{\theta_i^\alpha, \theta_i^\beta, \theta_i^\gamma} \cdot T_{-C_i^X, -C_i^Y, -C_i^Z} \cdot \bar{V}_i \quad (6)$$

$$\text{where}\quad\bar{V}_{i}=T_{\theta^{X}_{i},\theta^{Y}_{i},\theta^{Z}_{i}}\cdot V_{i},\qquad(7)$$

where Eq. 7 represents the parameterized translation step of moving the vertices $V_{i}$ of the $i^{th}$ bounding box using the translation matrix $T_{\theta^{X}_{i},\theta^{Y}_{i},\theta^{Z}_{i}}\in\mathbb{R}^{3\times 3}$, and Eq. 6 represents the parameterized rotation of the box around its centroid using the rotation matrix $R_{\theta^{\alpha}_{i},\theta^{\beta}_{i},\theta^{\gamma}_{i}}\in\mathbb{R}^{3\times 3}$.
Note that the translation matrix $-T_{C^{X}_{i},C^{Y}_{i},C^{Z}_{i}}$ temporarily moves the box to the origin $(0,0,0)$; the box is eventually transported back to its previous position $(C^{X}_{i},C^{Y}_{i},C^{Z}_{i})$, as determined by Eq. 7. Temporarily moving the box to the origin guarantees that it rotates around its centroid $(C^{X}_{i},C^{Y}_{i},C^{Z}_{i})$ and not around an arbitrary point.
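The translate-rotate-translate composition of Eqs. 6 and 7 can be sketched in a few lines of NumPy. The function name and the Euler-angle convention (Z·Y·X) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rigid_transform_box(V, trans, angles):
    """Apply the parameterized rigid transform of Eqs. 6-7 to the (8, 3)
    vertex array of a candidate box: translate by (theta^X, theta^Y, theta^Z)
    (Eq. 7), then rotate by (theta^alpha, theta^beta, theta^gamma) about the
    moved centroid (Eq. 6)."""
    V_bar = np.asarray(V, float) + np.asarray(trans, float)  # Eq. 7
    C = V_bar.mean(axis=0)                                   # centroid (C^X, C^Y, C^Z)
    a, b, g = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(g), -np.sin(g), 0],
                   [np.sin(g),  np.cos(g), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # Eq. 6: -T moves the box to the origin, R rotates it, T moves it back,
    # so the rotation happens about the centroid, not an arbitrary point.
    return (V_bar - C) @ R.T + C
```

Because the transform is rigid, the box's centroid moves exactly by the translation and all pairwise vertex distances are preserved.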

Finally, we define the optimization objective as follows:

$$\underset{\{\theta^{\alpha}_{i},\theta^{\beta}_{i},\theta^{\gamma}_{i},\theta^{X}_{i},\theta^{Y}_{i},\theta^{Z}_{i}\}_{i=1}^{H}}{\arg\min}\;\frac{1}{H}\sum_{i=1}^{H}\mathcal{L}(\tilde{V}_{i},M),\qquad(8)$$

where $\mathcal{L}$ is the loss function defined in the next section.

Loss Function. In essence, the loss assesses the alignment discrepancy between the box $\tilde{V}_{i}$ and the mesh surface $M$. Ideally, the mesh surface should intersect the box at its midpoint, effectively halving it. This alignment guarantees that the characters of the embedded watermark remain parallel to the surface, ensuring optimal readability and security.

Let $t_{1},t_{2},t_{3},t_{4}$ denote consecutive vertices on the front face of the box, and $b_{1},b_{2},b_{3},b_{4}$ denote the corresponding vertices on the bottom face. This gives us midpoints $m_{1}=\frac{b_{1}+t_{1}}{2}$, $m_{2}=\frac{b_{2}+t_{2}}{2}$, $m_{3}=\frac{b_{3}+t_{3}}{2}$, and $m_{4}=\frac{b_{4}+t_{4}}{2}$ on the lateral faces of the box.
Then, we sample equidistant points along the line segments defined by the pairs $(m_{1},m_{2})$, $(m_{2},m_{3})$, $(m_{3},m_{4})$, and $(m_{4},m_{1})$. We denote all sampled points, including the midpoints, by the set $\{s^{j}_{i}\}_{j=1}^{J}$, where $J$ is the total number of points per box. Using this set, we define the loss as the distance between these sampled points and the surface of the mesh:

$$\mathcal{L}(\tilde{V}_{i},M)=\frac{1}{J}\sum_{j=1}^{J}\mathcal{D}(s^{j}_{i},M),\qquad(9)$$

where $\mathcal{D}$ denotes the built-in differentiable loss in PyTorch3D [27] used to compute distances between point clouds and meshes. Intuitively, minimizing this loss drives all points $\{s^{j}_{i}\}_{j=1}^{J}$ onto the mesh surface. This alignment is achieved when the mesh surface passes through the middle of the box, parallel to both the front and back faces, thereby bisecting the box into halves.
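As a concrete illustration, the mid-height ring sampling and the average in Eq. 9 can be sketched as follows. The caller-supplied `dist_to_mesh` callable is a stand-in for PyTorch3D's differentiable point-to-mesh distance, and the sampling density is an arbitrary choice:

```python
import numpy as np

def sample_mid_ring(t, b, n_per_edge=8):
    """Sample the set {s^j}: the midpoints m_1..m_4 of the box's lateral
    faces, plus equidistant points on segments (m1,m2), (m2,m3), (m3,m4),
    and (m4,m1). t and b are (4, 3) arrays of front- and bottom-face
    vertices."""
    m = (np.asarray(t, float) + np.asarray(b, float)) / 2.0  # midpoints
    pts = []
    for k in range(4):
        a, c = m[k], m[(k + 1) % 4]
        for s in np.linspace(0.0, 1.0, n_per_edge, endpoint=False):
            pts.append((1 - s) * a + s * c)  # includes m_k at s = 0
    return np.stack(pts)

def box_alignment_loss(samples, dist_to_mesh):
    """Eq. 9: mean distance of the sampled ring points to the mesh
    surface. dist_to_mesh(p) stands in for PyTorch3D's point-mesh
    distance D."""
    return float(np.mean([dist_to_mesh(p) for p in samples]))
```

When the surface bisects the box (e.g., the plane $z=0.5$ through a unit box), every ring point lies on the surface and the loss vanishes.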

Learning. The optimization objective specified in Eq. 8 is nonlinear and non-convex. We use back-propagation to compute gradients of the loss in Eq. 9 and obtain the optimized parameters $\{\theta^{\alpha*}_{i},\theta^{\beta*}_{i},\theta^{\gamma*}_{i}\}_{i=1}^{H}$ and $\{\theta^{X*}_{i},\theta^{Y*}_{i},\theta^{Z*}_{i}\}_{i=1}^{H}$ by minimizing the objective in Eq. 8. This process yields the refined bounding-box vertices $\{V^{*}_{i}\}_{i=1}^{H}$ and their corresponding meshes $\{B^{*}_{i}(V^{*}_{i},F_{i})\}_{i=1}^{H}$.

Due to the non-convex nature of the optimization, it is possible to obtain sub-optimal solutions, resulting in poorly aligned boxes. To address this, we start with a large number H 𝐻 H italic_H of candidate bounding boxes distributed across the mesh’s surface. Then, we apply heuristics to eliminate sub-optimal solutions, as detailed in the following section.
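A minimal stand-in for this learning loop, treating one box's six parameters as a flat vector: the paper back-propagates through the differentiable loss, whereas this sketch substitutes central finite differences so it stays dependency-free. Learning rate and step count are illustrative:

```python
import numpy as np

def optimize_box(loss_fn, theta0, lr=0.05, steps=200, eps=1e-4):
    """Gradient descent on a per-box loss over the stacked parameters
    (theta^alpha, theta^beta, theta^gamma, theta^X, theta^Y, theta^Z).
    Central finite differences stand in for autograd gradients."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for k in range(theta.size):
            d = np.zeros_like(theta)
            d[k] = eps
            grad[k] = (loss_fn(theta + d) - loss_fn(theta - d)) / (2 * eps)
        theta -= lr * grad  # descend toward a (possibly local) minimum
    return theta
```

Because the true objective is non-convex, the returned parameters may be a local minimum, which is exactly why the pipeline over-provisions candidates and filters afterwards.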

4.3 Filtering

This module aims to select a subset of size $H_{f}\ll H$ from the optimized boxes obtained in the previous step. Guided by the utility-vs-security criteria outlined in Sec. 3, the goal is to ensure robust watermark security while preserving the utility of the watermarked asset with minimal degradation. Specifically, we do this by incrementally pruning sub-optimal, unnecessary, and redundant watermarks using a series of filtering steps.

We start by rejecting sub-optimal boxes that are poorly oriented (high loss) or are placed on highly salient regions [6] of the mesh. Then, we remove boxes that have poor visibility due to potential occlusion by sub-parts of the original mesh using a ray casting approach. Next, we strategically choose the minimal set of watermarks to meet security criteria. Specifically, the model is divided into eight octants by segmenting it with X, Y, and Z planes. Subsequently, we employ a greedy method to choose one box per octant, aiming to maximize the spacing between selected boxes in the solution set. Following this, we search for locations for watermarks using fixed angle increments of 30° around the X and Z axes. New watermarks are added if no existing watermarks are already positioned at that angle. These two steps ensure that the watermarks are viewable from multiple angles for high watermark security. Please refer to the supplement for more details about these steps.
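The octant split and greedy spacing heuristic described above can be sketched as follows; the tie-breaking order and the exact distance criterion are assumptions for illustration:

```python
import numpy as np

def octant_of(c):
    """Octant index 0-7 from the signs of a box centroid's X, Y, Z
    coordinates (the mesh is segmented by the X, Y, and Z planes)."""
    return int(c[0] >= 0) + 2 * int(c[1] >= 0) + 4 * int(c[2] >= 0)

def greedy_select(centroids):
    """Greedily choose at most one box per octant, maximizing the minimum
    spacing to the boxes already in the solution set."""
    centroids = np.asarray(centroids, dtype=float)
    chosen = []
    for o in range(8):
        idx = [i for i in range(len(centroids)) if octant_of(centroids[i]) == o]
        if not idx:
            continue  # no candidate landed in this octant
        if not chosen:
            chosen.append(idx[0])
        else:
            sel = centroids[chosen]
            # farthest-point choice within this octant
            chosen.append(max(idx, key=lambda i: float(
                np.min(np.linalg.norm(sel - centroids[i], axis=1)))))
    return chosen
```

On a symmetric candidate set such as the corners of a cube, the heuristic keeps exactly one box per octant, spreading the watermarks around the model.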

4.4 Embossing

Having obtained the locations and orientations of the final set of $H_{f}$ candidate boxes, we generate the corresponding 3D-text watermark meshes using a standard text-to-3D algorithm [20]. However, these watermarks remain discrete and disconnected objects (individual 3D characters), which could be identified and removed by a knowledgeable attacker, thereby compromising security. Additionally, these watermarks create a flat textual surface on potentially curved surfaces, which can significantly degrade the visual quality (utility) of the watermarked asset (see Fig. 2).

We propose a novel method for enhancing watermark meshes by aligning them with the local curvature of the underlying surface through a curve-matching fusion. Initially, we compute the intersection between the target model and the 3D-text watermark meshes. Subsequently, we extrude the intersection result at all vertices in the direction of the bounding box normal by a consistent distance. This adjustment results in updated watermark meshes that conform to the local curvature, maintaining a fixed distance from the underlying surface. Finally, we apply Boolean operations such as union and difference, as described in [16], to achieve embossing or debossing effects. For a detailed algorithm, please refer to the supplementary materials.
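A toy version of this curvature-matching step: each watermark vertex is re-seated at a fixed offset above the underlying surface along its normal, so the text conforms to the local curvature. The `surface_height` and `normal_fn` callables are simplified stand-ins for the real mesh intersection and normal queries:

```python
import numpy as np

def conform_watermark(verts, surface_height, normal_fn, offset=0.02):
    """Re-seat each watermark vertex (x, y, z) a constant `offset` above
    the underlying surface along its local normal, so the updated mesh
    follows the surface curvature at a fixed distance."""
    out = []
    for x, y, _ in verts:
        base = np.array([x, y, surface_height(x, y)])  # point on the surface
        out.append(base + offset * np.asarray(normal_fn(x, y), float))
    return np.stack(out)
```

For a flat surface with an upward normal, every vertex ends up exactly `offset` above $z=0$; on a curved surface the extrusion follows the height field instead of staying coplanar.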

5 Experiments

In this section, we analyze and compare the performance of the proposed automated 3D visible watermarking method both quantitatively and qualitatively against the baselines.

Table 1: Comparison results in terms of different watermark quality and asset quality metrics on the Manifold40, ObjaVerse, and Meshy datasets. ↑: higher is better. ↓: lower is better.

Baseline Methods. To the best of our knowledge, there is no previous work with the capability of automatically determining the locations of watermarks. Hence, none of the existing baselines is directly applicable to the task of 3D visible watermarking presented in this work. For the sake of comparison, we have re-purposed the most recent and capable baseline Li et al. [16] and assumed random locations on the 3D mesh for placing watermarks in an automated manner. All other methods [2, 6, 24] have significant shortcomings (see Sec.2) which limit their applicability to the practical scenarios of 3D visible watermarking.

Datasets and Implementation Details. We utilize three 3D datasets for our experiments. The first two datasets consist of 50 models randomly sampled from well-known benchmark datasets: Manifold40 [15] and ObjaVerse [11]. The third dataset comprises 20 textured models obtained from the Meshy [18] text-to-3D generative AI service. Our method dynamically chooses the number of watermarks as determined by the filtering step (Sec.4.3). For a fair comparison, in all our experiments, we choose the same number of watermarks for the Li et al. baseline. More implementation details, running time analysis, and statistics of the datasets are given in the supplementary materials.

Evaluation Metrics. For watermark quality, we define three metrics, namely WPS, Ray, and OCR, based on the concepts of watermark placement and visibility defined in Sec. 3. WPS is computed using Eq. 1, except that we approximate the watermark mesh $W_{i}$ by its oriented bounding box $B_{i}$ to efficiently compute the intersection area. The Ray and OCR scores are computed by Eq. 2 using a fixed number of camera views $T$ sampled around the $X$ and $Z$ axes. For Ray, the kernel $\mathcal{K_{V}}(M^{\prime},c^{t})$ is implemented via ray casting, which approximates viewability by firing multiple rays from the front face of a watermark's bounding box and checking whether all of them reach the camera uninterrupted. For OCR, we use a standard OCR method [9] to approximate $\mathcal{K_{V}}(M^{\prime},c^{t})$; it outputs 1 if at least one watermark can be correctly recognized in the 2D render of a given view $c^{t}$.
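The occlusion test behind the Ray kernel can be sketched with the standard Möller–Trumbore ray-triangle intersection; the camera model and sampling here are simplified assumptions:

```python
import numpy as np

def ray_triangle_t(o, d, tri, eps=1e-9):
    """Moller-Trumbore: parameter t of the hit of ray o + t*d with a
    triangle, or None if the ray misses it."""
    v0, v1, v2 = (np.asarray(p, float) for p in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(np.asarray(d, float), e2)
    det = float(e1 @ p)
    if abs(det) < eps:
        return None            # ray parallel to the triangle plane
    inv = 1.0 / det
    s = np.asarray(o, float) - v0
    u = float(s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = float(np.asarray(d, float) @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    return float(e2 @ q) * inv

def visible(points, camera, occluders, eps=1e-6):
    """K_V sketch for the Ray score: 1 only if every ray fired from the
    watermark's front-face sample points reaches the camera unobstructed
    by any occluding triangle."""
    cam = np.asarray(camera, float)
    for p in points:
        d = cam - np.asarray(p, float)   # unnormalized: a hit strictly
        for tri in occluders:            # between point and camera has
            t = ray_triangle_t(p, d, tri)  # eps < t < 1 - eps
            if t is not None and eps < t < 1.0 - eps:
                return 0
    return 1
```

A triangle sitting between a sample point and the camera makes the watermark non-viewable from that pose; moving it aside restores visibility.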

For asset utility, we define five scores. The first three, Sampled Mean Squared Error (SMSE), Isolated Parts Error (IPE), and Local Curvature Error (LCE), are based on the geometric similarity defined in Eq. 3. The SMSE score is analogous to the Mean Squared Error (MSE), except that it is computed between two meshes instead of two vectors of points. The IPE metric calculates the difference in the total number of isolated parts between the watermarked mesh $M^{\prime}$ and the original mesh $M$. This difference increases when watermarks are poorly aligned with the model surface, causing certain 3D letters to float above or below it. The LCE score assesses curvature preservation post-watermarking (discussed in Sec. 4.4). It calculates the variance of the distance from the mesh surface to the top of the watermark in watermarked areas; for curve-preserving watermarks, this distance remains constant, resulting in lower variance.
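For instance, the isolated-part count that IPE compares between the two meshes can be computed with a union-find over the face list (a sketch; the paper's exact implementation may differ):

```python
def isolated_parts(faces):
    """Count the connected components of a triangle mesh from its face
    list: vertices sharing a face are connected. IPE is the difference in
    this count between the watermarked and the original mesh."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for f in faces:
        for v in f:
            parent.setdefault(v, v)
        union(f[0], f[1])
        union(f[1], f[2])
    return len({find(v) for v in parent})
```

Two triangles that share a vertex form one part; two vertex-disjoint triangles form two, which is what a floating 3D letter adds to the count.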


Figure 2: Qualitative analysis of our method with (left) and without (right) curve-matching fusion on an example from Objaverse.


Figure 3: Trade-off results between the watermark quality and asset quality metrics on Manifold40. $H_{f}$: number of watermarks.

The remaining two scores, Saliency Error (SE) and Semantic Score (SS), are based on the definitions in Eq. 4 and Eq. 5. Specifically, to implement the saliency kernel $\mathcal{K_{S}}(w)$, we pick an off-the-shelf algorithm [21] to compute the saliency map and report the average saliency of the sub-mesh $w$ as the output. For the semantic kernel $\mathcal{K_{F}}(c_{o}^{t},c_{w}^{t})$, we compute the cosine similarity between the semantic features of the 2D renders for views $c_{o}^{t}$ and $c_{w}^{t}$ using a pre-trained ResNet50 [13] feature extractor.
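Once the two renders have been passed through the feature extractor, the semantic kernel reduces to a cosine similarity between their feature vectors; the ResNet50 feature extraction itself is omitted here:

```python
import numpy as np

def semantic_score(feat_o, feat_w):
    """Cosine similarity between the feature vectors of the original
    render c_o^t and the watermarked render c_w^t (the paper extracts
    these with a pre-trained ResNet50)."""
    a = np.asarray(feat_o, float).ravel()
    b = np.asarray(feat_w, float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical renders score 1.0; the score drops as watermarking shifts the semantic features of the view.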

5.1 Quantitative Results

The comparison results of our method and the baseline in terms of watermark quality and asset utility are summarized in Tab. 1. Overall, our approach outperforms the baseline on all metrics by a significant margin. In particular, as indicated by the WPS and Ray scores, the watermarks embedded using our method achieve ≈20% higher surface alignment (WPS) and multi-view visibility (Ray) compared to the baseline. Moreover, our OCR scores are 7%, 15%, and 16% higher for Manifold40, ObjaVerse, and Meshy, respectively, which shows the superiority of our method in watermark text readability.

On the other hand, for asset utility, our method achieves a ≈4× lower SMSE error rate on average, showing high geometric similarity between the watermarked and original 3D assets. In addition, we achieve significantly lower IPE and LCE error rates, which guarantee very few isolated parts and high curvature preservation after watermarking, respectively. Further, the results demonstrate our method's superiority in preserving the salient features and semantic context of the asset after watermarking.

In order to study the trade-off between watermark quality and asset utility (discussed in Sec. 3.3), we perform another set of experiments with a varying number of watermarks $H_{f}\in\{4,16,32\}$. Two trade-off curves, Semantic vs. Placement and SMSE vs. OCR, are illustrated in Fig. 3. Each point on a curve represents results averaged over the models in the Manifold40 dataset. In general, increasing the number of watermarks results in higher watermark quality but lower asset utility. However, compared to the baseline, our method achieves a significantly improved trade-off. In particular, for the same semantic score of ≈0.80 (given in Tab. 1), our method achieves a placement score of 0.46, which is 18% higher than the baseline. Moreover, our method can achieve an OCR score of 0.60 while incurring an SMSE error of only 0.0016, whereas the baseline achieves a much lower OCR score of 0.30 with a much higher SMSE error of 0.006. More trade-off curves are provided in the supplementary materials.

5.2 Qualitative Results

Fig. 4 shows the qualitative results of our method and the baseline. The watermarks are colored red to enhance visibility for the reader. Our method generates watermarks that are better aligned with the surface and visible from multiple views. In contrast, the baseline produces watermarks that either float off the surface or are hidden beneath it, resulting in poor visibility. Further, Fig. 2 presents a qualitative analysis of our method with and without the proposed curve-matching fusion module. From the enlarged areas, it is clear that including this fusion forces the watermarks to follow the surface curvature, which provides higher watermark visibility and asset utility. More qualitative results, along with analysis, are given in the supplementary materials. Moreover, to subjectively analyze the performance of our method compared to the baseline, we conducted a user study, which is summarized in the supplementary materials.



Figure 4: Visual example of 3D models from Manifold40 (top) and Meshy (bottom) watermarked with our method (left) and Li et al. baseline (right). Ours provides better placement quality, readability, and viewability.

5.3 Ablation Study

We perform an ablation study to analyze the effect of the components of our method, including the rigid-transform optimization (Sec. 4.2), curvature fusion (Sec. 4.4), and the filtering steps (Sec. 4.3). The corresponding watermark quality and asset utility results on the ObjaVerse dataset are presented in Tab. 2. As summarized in the table, excluding any of the optimization or filtering modules negatively affects both watermark quality and asset utility metrics. Specifically, our method without the optimization stage not only yields approximately 20% lower WPS and Ray scores, but also increases the SMSE and LCE error rates by 2× and 7×, respectively. The filtering steps have a smaller impact, as the optimization module contributes more to the overall performance of the method. It is also seen that excluding the curve-matching fusion module results in a much higher LCE error, which shows the significance of this module in preserving the surface curvature of the asset.

Table 2: Ablation over different components of our method on the ObjaVerse dataset. CF: curvature fusion.

5.4 Attack and Robustness Analysis

First, we analyze the robustness of our method in comparison to Li et al. [16] baseline. We consider the cropping and unauthorized removal attacks, which are chosen to reflect strong adversaries. The results are summarized in Tab.3. In the crop attack, an attacker aims to illegitimately crop a significant part of the model. The attack severely impacts both watermark quality and asset utility. Despite this, our method surpasses the baseline as it applies watermarks at multiple angles, thereby preserving more watermarks and leading to improved watermark quality metrics.

On the other hand, for the unauthorized removal attack, we assume the attacker can remove the vertices and faces belonging to the watermark, possibly using manual or automated methods. As shown in Tab. 3, our method preserves watermark quality, which degrades significantly for the baseline. This is because removing watermarks produced by our method leaves holes in the mesh surface, creating a silhouette through which the watermark can still be read clearly. In contrast, for the baseline, parts of the watermark that are not in direct contact with the surface can be removed completely, rendering the watermark unreadable. Please refer to the supplement for a qualitative comparison.

Table 3: Quantitative results of our method compared to the baseline against crop and removal attacks.

Lastly, we analyze the performance of our watermarks against varying strengths of the remeshing attack, which is based on Blender's Remesh Modifier [8]. As depicted in Fig. 5, at low attack strength (middle), both asset utility and the visible watermarks are reasonably preserved, albeit with some loss of texture during remeshing. As the attack strength increases (right), the visible watermarks are removed, but asset utility is also significantly impaired, particularly damaging facial features.


Figure 5: Left: the original model; middle and right: the results of the remeshing attack at low and high strength, respectively.

Table 4: Comparison with the invisible method of Wang et al. [31].

5.5 Comparison with Invisible Watermarking

Here, we provide a comparison of performance and robustness with an invisible baseline, Wang et al. [31]. The performance results are shown in Tab. 4. As expected, the invisible technique performs poorly on the watermark visibility criteria of our benchmark, signaling its inability to meet the demands of the proposed task. Further, we found that this baseline was completely ineffective (<50% bit accuracy) against the cropping and remeshing attacks (even at low strength), signaling its poor robustness in comparison to the proposed visible method.

6 Conclusion

In this paper, we tackled the novel task of automatically embedding 3D visible watermarks to arbitrary 3D models. We first defined the objectives of the task of 3D visible watermarking in terms of various aspects of watermark and asset quality. Then, we proposed an end-to-end pipeline that uses a gradient-based optimization to achieve high watermark quality and high asset utility. We conducted an extensive set of experiments on two benchmark 3D datasets to demonstrate the effectiveness of our approach. Through our work, we aim to further research in the novel and practical direction of 3D visible watermarking. The limitations of our work are discussed in the supplementary materials.

References

  • [1] National Technical Committee 260 on Cybersecurity of Standardization Administration of China. Cybersecurity standard practice guide - methods for content identification of generative AI services.
  • [2] X.-C An, Rongrong Ni, and Yao Zhao. Visible watermarking for 3D models based on boundary adaptation and mesh subdivision. 34:503–514, Sept. 2016.
  • [3] O. Benedens. Geometry-based watermarking of 3D models. IEEE Computer Graphics and Applications, 19(1):46–55, Jan. 1999.
  • [4] Adrian G. Bors and Ming Luo. Optimized 3D watermarking for minimal surface distortion. IEEE transactions on image processing: a publication of the IEEE Signal Processing Society, 22(5):1822–1835, May 2013.
  • [5] Virginia Brancato, Joaquim Miguel Oliveira, Vitor Manuel Correlo, Rui Luis Reis, and Subhas C. Kundu. Could 3D models of cancer enhance drug screening? Biomaterials, 232:119744, Feb. 2020.
  • [6] Jinliang Cao, Zhiwei Niu, Anhong Wang, and Li Liu. Reversible Visible Watermarking Algorithm for 3D Models. 2020.
  • [7] François Cayre, Patrice Rondao Alface, Francis Schmitt, Benoit Macq, and Henri Maître. Application of spectral decomposition to compression and watermarking of 3D triangle mesh geometry. Sig. Proc.: Image Comm., 18:309–319, Apr. 2003.
  • [8] Blender contributors. Blender 4.1 manual: Remesh modifier.
  • [9] Keras-OCR contributors. A packaged and flexible version of the CRAFT text detector and Keras CRNN recognition model.
  • [10] Dawson-Haggerty et al. Trimesh: a Python library for loading and using triangular meshes.
  • [11] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A Universe of Annotated 3D Objects, Dec. 2022. arXiv:2212.08051 [cs].
  • [12] Planning for Library of Congress Collections. Wavefront obj file format.
  • [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition, Dec. 2015. arXiv:1512.03385 [cs].
  • [14] The White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Oct. 2023.
  • [15] Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang, Tai-Jiang Mu, and Ralph R. Martin. Subdivision-Based Mesh Convolution Networks. ACM Transactions on Graphics, 41(3):1–16, June 2022. arXiv:2106.02285 [cs].
  • [16] An-Bo Li, Hao Chen, and Xian-Li Xie. Visible watermarking for 3D models based on 3D Boolean operation. Egyptian Informatics Journal, 25:100436, Mar. 2024.
  • [17] Matthew Liberatore and William Wagner. Virtual, mixed, and augmented reality: a systematic review for immersive systems research. Virtual Reality, 25:1–27, Sept. 2021.
  • [18] Meshy LLC. Meshy - Free AI 3D Model Generator. https://www.meshy.ai.
  • [19] EU parliament Members. EU AI Act: first regulation on artificial intelligence, Aug. 2023.
  • [20] Harishankar Narayanan. codetiger/Font23D, July 2024. original-date: 2015-04-24T08:53:52Z.
  • [21] Stavros Nousias, Gerasimos Arvanitis, Aris S. Lalos, and K. Moustakas. Mesh Saliency Detection Using Convolutional Neural Networks. IEEE International Conference on Multimedia and Expo, 2020.
  • [22] T. O’Hailey. Hybrid Animation: Integrating 2D and 3D Assets. Focal Press, 2010.
  • [23] Pranav Parekh, Shireen Patel, Nivedita Patel, and Manan Shah. Systematic review and meta-analysis of augmented reality in medicine, retail, and games. Visual Computing for Industry, Biomedicine, and Art, 3(1):21, Sept. 2020.
  • [24] Fei Peng, Wenjie Qian, and Min Long. Visible Reversible Watermarking for 3D Models Based on Mesh Subdivision. In Xianfeng Zhao, Yun-Qing Shi, Alessandro Piva, and Hyoung Joong Kim, editors, Digital Forensics and Watermarking, Lecture Notes in Computer Science, pages 136–149, Cham, 2021. Springer International Publishing.
  • [25] Saeed Ranjbar Alvar, Mohammad Akbari, Lingyang Chu, Yong Zhang, et al. Amuse: Adaptive multi-segment encoding for dataset watermarking. arXiv preprint arXiv:2403.05628, 2024.
  • [26] Saeed Ranjbar Alvar, Mohammad Akbari, David Yue, and Yong Zhang. Nft-based data marketplace with digital watermarking. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4756–4767, 2023.
  • [27] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3D Deep Learning with PyTorch3D, July 2020. arXiv:2007.08501 [cs].
  • [28] Ahmad Rezaei, Mohammad Akbari, Saeed Ranjbar Alvar, Arezou Fatemi, and Yong Zhang. Lawa: Using latent space for in-generation image watermarking. arXiv preprint arXiv:2408.05868, 2024.
  • [29] Francesca Uccheddu, Massimiliano Corsini, and Mauro Barni. Wavelet-based blind watermarking of 3D models. pages 143–154, Sept. 2004.
  • [30] Mythreye Venkatesan, Harini Mohan, Justin R. Ryan, Christian M. Schürch, Garry P. Nolan, David H. Frakes, and Ahmet F. Coskun. Virtual and augmented reality for biomedical applications. Cell Reports. Medicine, 2(7):100348, July 2021.
  • [31] Shengxian Wang, Li Li, Jianfeng Lu, and Ching-Chun Chang. A Watermarking Method for 3D Game Model Based on FCM Clustering and Density Tag Estimation of Vertex Set. In Lakhmi C. Jain, Sheng-Lung Peng, and Shiuh-Jeng Wang, editors, Security with Intelligent Computing and Big-Data Services 2019, pages 139–156, Cham, 2020. Springer International Publishing.
  • [32] Xiangyang Xu, Shengzhou Xu, Lianghai Jin, and Enmin Song. Characteristic analysis of Otsu threshold and its applications. Pattern Recognition Letters, 32:956–961, May 2011.
  • [33] Innfarn Yoo, Huiwen Chang, Xiyang Luo, Ondrej Stava, Ce Liu, Peyman Milanfar, and Feng Yang. Deep 3D-to-2D Watermarking: Embedding Messages in 3D Meshes and Extracting Them from 2D Renderings, Mar. 2022. arXiv:2104.13450 [cs, eess].
  • [34] Innfarn Yoo, Huiwen Chang, Xiyang Luo, Ondrej Stava, Ce Liu, Peyman Milanfar, and Feng Yang. Deep 3d-to-2d watermarking: Embedding messages in 3d meshes and extracting them from 2d renderings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10031–10040, 2022.

Appendix A Supplementary Materials

In this appendix, we present the supplementary materials for the paper titled "Towards Secure and Usable 3D Assets: A Novel Framework for Automatic Visible Watermarking".

A.1 Code and Demo

A.2 Datasets Statistics

As stated in Sec. 5, we sample a subset of 50 models from two benchmark 3D datasets, namely Manifold40 and ObjaVerse. Specifically, we used random sampling stratified by output classes from the train set of each dataset. Additionally, for the Meshy GenAI dataset, we downloaded 20 textured models generated using the Meshy text-to-3D AI service. The statistics of the vertices and faces of these datasets are presented in Tab. 5.

Table 5: Dataset Statistics.

Table 6: User study results over GenAI Meshy dataset watermarked with our method and the baseline.

A.3 User Study

In order to subjectively analyze the performance of our method compared to the baseline, we conducted a user study involving 10 volunteer participants. Each participant was randomly presented with either a textured or untextured 3D object from the GenAI Meshy dataset, watermarked using our method or the baseline. Participants were then asked to answer the following six "yes/no" questions assessing the watermark quality and utility of the displayed 3D object:

  • Are the watermarks visible from different views?
  • Are the watermarks' placement and orientation good?
  • Are the watermark texts readable?
  • Is the asset's geometry/shape preserved?
  • Is the asset's semantics preserved?
  • Are the asset's salient areas protected?

In total, 373 data samples were collected, where values of 1 and 0 were assigned to "yes" and "no" answers, respectively. The averaged numerical results across all samples are summarized in Tab. 6. As shown in the table, users gave significantly higher scores to our method for both textured and untextured objects in terms of the visibility of the watermarks from multiple views (Visibility), placement and orientation (Placement), and textual readability (Readability) of the watermarks. Specifically, across textured and untextured cases, the baseline scored approximately 46%, 68%, and 64% lower than our method for placement, readability, and visibility, respectively.

On the other hand, for asset utility, users rated our and the baseline method similarly in terms of preserving the overall semantics and context (Semantics) of the asset after watermarking. However, our method demonstrated superior performance compared to the baseline in preserving the geometry (Geometry) and salient features (Saliency) of the asset, achieving approximately 37% and 0.26% higher scores, respectively.

Overall, users rated our method slightly higher for watermark quality on untextured objects than on textured ones. This discrepancy is largely due to texture (i.e., color information), which can significantly influence the visibility and readability of watermarks, particularly when the watermark color closely matches the asset's texture. We address this issue as a limitation of our method in Sec. A.14, highlighting its importance for future improvements.


Figure 6: Average runtime (x-axis) required for watermarking models having an average number of vertices (y-axis) for Manifold40 (left) and ObjaVerse (right) datasets, respectively.

A.4 Runtime Analysis

In this section, we provide a runtime analysis of our method. We count all the time required for end-to-end watermarking of an asset, including preprocessing, candidate generation, optimization, filtering, and embossing time. The average runtime (in seconds) as a function of the average number of vertices is plotted in Fig. 6 for the Manifold40 and ObjaVerse datasets, respectively. Based on this empirical analysis, the overall runtime grows linearly with the number of vertices of the target model: a model with 60K vertices requires ≈30 s, and a model with 1.2M vertices requires ≈180 s for watermarking.


Figure 7: More trade-off results between the watermark quality and asset quality metrics on Manifold40. $H_f$: number of watermarks.

A.5 Watermark Quality vs. Asset Utility Trade-off

In Sec. 5.1, we studied the trade-off between watermark quality and asset utility by performing experiments with different numbers of watermarks $H_f \in \{4, 16, 32\}$. Two trade-off curves, Semantic vs. Placement and SMSE vs. OCR, were illustrated there.

In this section, four more trade-off curves, namely SMSE vs. Placement, Semantic vs. OCR, IPE vs. OCR, and IPE vs. Placement, are shown in Fig. 7. As with the trade-off curves in the main body of the paper, increasing the number of watermarks results in higher watermark quality but lower asset utility. However, our method achieves significantly better trade-offs than the baseline. For example, our method achieves an OCR score of ≈0.85 with an IPE error of 18.0, whereas the baseline achieves a much lower OCR score of 0.30 with a higher IPE error of 20.

Additionally, as shown in Fig. 7, the effect of the number of watermarks on the placement score vs. the geometry-based SMSE error is very minor. In other words, regardless of the number of watermarks, our method can effectively find optimal locations and orientations to emboss the watermarks without damaging the overall geometry of the original asset.

Algorithm 1 Candidate Box Generation

Input: target coordinates $P_C^i$, target normal $P_N^i$, watermark string $Z_m$, watermark size $Z_s$, watermark font $Z_f$

1: $V_i^{ws}, F_i^{ws} \leftarrow$ `text_to_3d`$(Z_m, Z_s, Z_f)$
2: $V_i^{bs}, F_i^{bs} \leftarrow$ `oriented_bounding_box`$(V_i^{ws}, F_i^{ws})$
3: $\alpha_i, \beta_i, \gamma_i \leftarrow$ `compute_angles`$([0, 0, 1], P_N^i)$
4: $R_i \leftarrow$ `generate_rotation_matrix`$(\alpha_i, \beta_i, \gamma_i)$
5: $T_i \leftarrow$ `generate_translation_matrix`$(P_C^i)$
6: $V_i \leftarrow T_i \cdot R_i \cdot V_i^{bs}$

Output: $V_i$

A.6 Implementation Details

In this section, we provide more implementation details of our work. As stated in Sec. 3, the task of 3D visible watermarking has three inputs, namely 1) target model, 2) watermark text, and 3) algorithm parameters such as watermark text font, thickness, and size. We fix the input parameters for all our experiments unless stated otherwise. For input (1), since our method does not depend on texture information, we remove texture information from input models and convert them into standard OBJ file format [12].

However, our method supports watermarking textured objects, which is done by simply replacing the untextured original model with the textured one during the embossing step. We provide qualitative results of watermarking textured models in Fig. 15. To avoid variability in metrics computation, we stick to untextured models that are scaled to a fixed size of 30 and centered at the origin $(0, 0, 0)$. Additionally, to conserve computational resources, we decimate the models to keep the number of vertices below 80,000 when generating watermark boxes; during the watermark embossing step, however, we still use the original undecimated model. Further, we always use "watermark" as the watermark text (input 2) for all our experiments and use the default text font provided by the off-the-shelf library [20] for converting text to a 3D mesh. We use a thickness (distance between the front and back faces of the watermark) of 0.5 and a fixed watermark size (scale of mesh) of 4 in all our experiments unless stated otherwise. Finally, in the embossing module, we use a fixed value of 0.05 as the extrude strength in Algorithm 2.

We sample a fixed number of $H_s = 300$ points for generating the initial set of candidate boxes. From this initial set, we obtain the final $H$ by rejecting points that are too close to each other (radius $H_r < 1$). The number of final watermarks after filtering, $H_f$, varies per model; its average value is 9.97 for Manifold40 and 8.82 for ObjaVerse. The value of $J$, the number of sampled points for computing the alignment loss (Eq. 9), is fixed at 179, including the midpoints. For optimizing the objective in Eq. 8, we run a fixed number of 200 gradient descent steps with a stopping criterion of mean loss less than 0.005.

We used a 12-CPU-core machine with two NVIDIA GeForce GTX 1080 GPUs to run our experiments. All our code was implemented in Python 3.10, and the optimization objective, including gradient back-propagation, was implemented using PyTorch3D [27]. We use off-the-shelf 3D libraries to implement many common mesh operations in this work. Note that our work can handle all 3D models that can be converted to a mesh and that support 3D Boolean operations: we simply convert the given format into a mesh model to obtain watermark locations, then apply Boolean operations at the end to fuse the watermarks into the target format.


Figure 8: Effect of optimization on candidate boxes. The left figure shows misaligned candidate boxes placed using Algorithm 1 that are fixed (right) by either moving, rotating, or tilting these boxes using the proposed rigid body optimization.

A.7 Initialization (more details)

In this section, we provide more specific details of the Initialization module (Sec. 4.1). As mentioned earlier, we sample $H$ approximately equidistant points on the surface of the target model. Specifically, we start by randomly sampling $H_s$ points $\{(x, y, z) \mid x, y, z \in \mathbb{R}\}_{i=1}^{H_s}$ on the surface of the target model. Then, we reject the points that are closer to each other than a radius of $H_r$. After this step, we denote the final set of approximately equidistant points by $\{P_C^i\}_{i=1}^{H}$ and their corresponding surface normals by $\{P_N^i\}_{i=1}^{H}$.
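The rejection step above can be sketched as a single greedy pass over the randomly sampled points; the function name `equidistant_subset` is ours, not from the paper's code:

```python
import math

def equidistant_subset(points, radius):
    """Greedy rejection sampling: keep a candidate point only if it lies at
    least `radius` away from every point already kept, yielding an
    approximately equidistant subset of the surface samples."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= radius for q in kept):
            kept.append(p)
    return kept
```

For example, with collinear samples at x = 0, 0.5, 1.2, 2.5 and radius 1, the point at 0.5 is rejected while the other three survive.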

Then, for each of these sampled points, we use the procedure in Algorithm 1 to generate the candidate boxes. Specifically, we start (Lines 1-2) by generating a watermark mesh $W_i^s(V_i^{ws}, F_i^{ws})$ and its bounding box $B_i^s(V_i^{bs}, F_i^{bs})$ using the off-the-shelf algorithms `text_to_3d` and `oriented_bounding_box`. We configure these algorithms to ensure that the meshes are generated at the origin $(0, 0, 0)$ with the face of the 3D text pointing in the $+Z$ direction $(0, 0, 1)$, also referred to as the front direction.
Then, we perform a rigid-body transformation (Lines 3-6) to transport the box to the $i$-th sampled location $P_C^i$ and align it along its normal $P_N^i$. Specifically, we use the `compute_angles` routine (Line 3) to compute the angles between the front direction of the box $(0, 0, 1)$ and the target direction $P_N^i$. We use these angles $\alpha_i, \beta_i, \gamma_i$ to form the rotation matrix $R_i$ (Line 4) and the target location $P_C^i$ to compute the translation matrix $T_i$ (Line 5). Finally, we apply these rotation and translation matrices in the final transform (Line 6) to obtain the transformed vertices $V_i$.
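As an illustration of this rigid-body placement, the sketch below folds the angle computation and rotation-matrix construction (Lines 3-5) into a single Rodrigues rotation plus translation. The helper names are ours, and the paper's implementation uses PyTorch3D rather than this plain-Python version:

```python
import math

def rotation_aligning_z(n):
    """3x3 rotation (Rodrigues formula) sending the front direction
    [0, 0, 1] onto the (not necessarily unit) target normal n."""
    nx, ny, nz = n
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    if nz < -1.0 + 1e-9:  # normal points straight down: flip about the X axis
        return [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]
    # axis = Z x n = (-ny, nx, 0), cos = nz; R = I + K + K^2 / (1 + cos)
    k = 1.0 / (1.0 + nz)
    return [
        [nz + k * ny * ny, -k * nx * ny,     nx],
        [-k * nx * ny,      nz + k * nx * nx, ny],
        [-nx,              -ny,               nz],
    ]

def place_candidate_box(box_vertices, target_point, target_normal):
    """Rotate a box built at the origin (front facing +Z) to face
    target_normal, then translate it to target_point."""
    R = rotation_aligning_z(target_normal)
    placed = []
    for x, y, z in box_vertices:
        rx = R[0][0] * x + R[0][1] * y + R[0][2] * z
        ry = R[1][0] * x + R[1][1] * y + R[1][2] * z
        rz = R[2][0] * x + R[2][1] * y + R[2][2] * z
        placed.append((rx + target_point[0], ry + target_point[1], rz + target_point[2]))
    return placed
```

By construction, the box's front direction $(0, 0, 1)$ maps exactly onto the target normal, which is the invariant Lines 3-6 of Algorithm 1 establish.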

A.8 Finetuning (visualization)

In this section, we present visualizations before and after optimization in the finetuning module (Sec. 4.2). As shown in Fig. 8 (left), the boxes placed using the initialization module are misaligned with the surface of the dolphin's body. After optimization (Fig. 8, right), the boxes' alignment is corrected, and they are positioned accurately to follow the curvature of the dolphin's surface. Specifically, the optimization involves three operations: tilting, rotating, or moving the boxes to achieve proper alignment. For instance, the cyan box situated at the top fin of the dolphin cannot be rotated or tilted and thus needs to be relocated to improve its alignment. Conversely, many boxes on the body can be adjusted by simply rotating or tilting them to correct their alignment.


Figure 9: Step-by-Step Filtering Process: This illustration visualizes the progressive filtering of bounding boxes. From left to right, each image displays the remaining boxes after applying a specific filter. The first image shows the result after the low roughness score filter. The second image depicts the results after discarding boxes with low loss. The third image presents the outcome of filtering out overlapping and occluded boxes. Finally, the fourth image displays the final set after applying a multi-octant and multi-angle visibility filter.

Algorithm 2 Curve Matching Fusion

Input: original mesh $M$, watermark meshes $\{W_i\}_{i=1}^{H_f}$, extrude strength $H_y$

1: for $W_i \in \{W_i\}_{i=1}^{H_f}$ do
2:   $\overline{W}_i \leftarrow$ `boolean_intersection`$(M, W_i)$
3:   $N_i^E \leftarrow$ `closest_normal`$(W_i)$
4:   $\tilde{W}_i \leftarrow$ `perform_extrusion`$(\overline{W}_i, N_i^E, H_y)$
5: end for
6: $M' \leftarrow$ `boolean_union`$(M, \{\tilde{W}_i\}_{i=1}^{H_f})$

Output: $M'$

A.9 Filtering (more details)

In this section, we provide more details about the individual filtering steps discussed in Sec. 4.3. Going from left to right, Fig. 9 shows the results of the successive filtering operations. As seen, each filtering step prunes undesirable boxes and keeps the most important ones, with the aim of high watermark visibility and high asset utility.

Well-Aligned Boxes with Low Loss: The loss defined in Eq. 9 quantifies the alignment accuracy of a box with the mesh surface. To reject sub-optimal boxes that are misaligned, we apply straightforward thresholding on the individual box loss. Through observation, we have determined that a loss of less than 0.005 typically indicates well-aligned boxes.

Boxes with Low Local Roughness: We calculate the local roughness beneath each candidate bounding box and discard boxes exceeding a specific threshold. The detailed procedure is as follows. First, we identify all vertices within the $i$-th bounding box $B_i$. From these vertices, we randomly sample $H_r$ points and compute the average inverse cosine between pairs of their normals. The local roughness score is defined as

$$\mathcal{R}(B_i) = \frac{1}{H_r^2} \sum_{j=1}^{H_r} \sum_{k=1}^{H_r} \frac{1}{\cos(N_j^i, N_k^i)},$$

where $N_j^i$ and $N_k^i$ are the normals of points inside box $B_i$. Through analysis, we have determined that a roughness score of less than 1.25 typically indicates boxes located on flatter surfaces.
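A minimal sketch of this roughness score, assuming the sampled normals are unit vectors (the function name and the small cosine guard are ours):

```python
def roughness_score(normals):
    """Mean inverse pairwise cosine between unit normals inside a box:
    exactly 1.0 on a perfectly flat patch, growing as the surface
    beneath the box gets rougher."""
    n = len(normals)
    total = 0.0
    for a in normals:
        for b in normals:
            # guard near-orthogonal pairs so the inverse cosine stays finite
            cos = max(sum(x * y for x, y in zip(a, b)), 1e-6)
            total += 1.0 / cos
    return total / (n * n)
```

A flat patch (all normals identical) scores exactly 1.0, so the 1.25 threshold above admits only mildly curved regions.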

Non-Overlapping Boxes: To handle potential overlaps among candidate boxes, we employ a greedy approach. Initially, we randomly select a box and iteratively discard any box that overlaps with those already chosen. Overlap is determined by checking for intersections among the vertices of the original mesh contained within pairs of bounding boxes.
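The greedy pass can be sketched as follows, representing each box by the set of mesh vertex indices it contains (names and representation are ours, for illustration):

```python
def greedy_non_overlapping(box_vertex_sets):
    """Greedy selection: keep a box only if the mesh vertices it contains
    are disjoint from those of every box already kept; returns the
    indices of the kept boxes."""
    kept, used = [], set()
    for i, vertex_ids in enumerate(box_vertex_sets):
        if used.isdisjoint(vertex_ids):
            kept.append(i)
            used |= set(vertex_ids)
    return kept
```

Because the pass is greedy, the result depends on iteration order, which matches the random initial selection described above.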

Non-Occluding Boxes: To mitigate potential occlusions of some boxes by parts of the target model, such as under the arms or between the thighs in humanoid models, leading to diminished watermark visibility, we utilize a ray casting method. This approach helps identify and subsequently remove watermarks that are occluded. We sample equidistant points from the front face of each bounding box and cast rays along the normal direction of the watermark. If any of these cast rays intersect with parts of the target model, the watermark is classified as occluded and is subsequently removed.

Multi-Octant Presence: To deter model theft, watermarks should be distributed across diverse locations of the model. We achieve this by dividing the model into 8 octants using planes along the X 𝑋 X italic_X, Y 𝑌 Y italic_Y, and Z 𝑍 Z italic_Z axes passing through the model’s centroid. Each octant is assigned a watermark. If multiple watermark options exist per octant, we select the one furthest from the watermarks in adjacent octants.
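The octant assignment can be sketched by reading off the sign of each coordinate relative to the centroid (a hypothetical helper, not the paper's code):

```python
def octant_index(point, centroid):
    """Index 0..7 of the octant containing `point`, with the three
    axis-aligned splitting planes passing through `centroid`."""
    bits = [1 if p >= c else 0 for p, c in zip(point, centroid)]
    return bits[0] * 4 + bits[1] * 2 + bits[2]
```

Grouping candidate boxes by this index then allows picking one watermark per octant, as described above.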

Multi-Angle Visibility: In this step, we add extra boxes to ensure the watermark is visible from multiple viewing angles. This prevents attackers from using 2D renders of a 3D object where the watermark might not be visible due to camera angles. Our goal is to position at least one watermark on the visible portion of the model’s surface for each viewing angle. To achieve this, we iterate through fixed angle increments of 30° around the X 𝑋 X italic_X and Z 𝑍 Z italic_Z axes and add a watermark if no other existing watermarks are found for that angle.
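The per-angle coverage check can be sketched as follows, assuming each existing watermark is summarized by its facing normal and each view by a unit vector toward the camera (this reduction is our simplification of the procedure above):

```python
import numpy as np

def uncovered_views(view_dirs, wm_normals, thresh_deg=45.0):
    """Return the viewing directions (unit vectors toward the camera) for
    which no existing watermark normal lies within thresh_deg, i.e. the
    angles at which an extra watermark box must be added."""
    cos_t = np.cos(np.radians(thresh_deg))
    return [v for v in view_dirs
            if not any(float(np.dot(v, n)) >= cos_t for n in wm_normals)]
```

Iterating `view_dirs` over the 30° increments around the X and Z axes and adding one watermark per returned direction reproduces the step described above.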

A.10 Watermark Embossing

In this section, we provide more details of the novel curvature-matching fusion presented in Sec. 4.4. We start by generating 3D-text watermark meshes $\{W_i\}_{i=1}^{H_f}$ using a standard text-to-3D algorithm [20], positioned and oriented according to the selected bounding boxes $\{B_f^i\}_{i=1}^{H_f}$. Then, given the target mesh $M$ and the generated 3D watermarks $\{W_i\}_{i=1}^{H_f}$, we use Algorithm 2 to obtain the watermarked mesh $M'$. Specifically, we first apply a boolean intersection operation (Line 2) between the original mesh $M$ and the $i$-th watermark mesh $W_i$.
Then, we find the extruding normal $N_i^E$ by computing the normal of the closest point on the mesh (Line 3) from the centroid of the watermark mesh $W_i$. Next, we perform the extrusion operation (Line 4) on the intersection silhouette $\bar{W_i}$ to give an embossing effect. Finally, we take a boolean union [16] of the extruded watermark meshes $\{\tilde{W_i}\}_{i=1}^{H_f}$ and the original mesh $M$ to obtain the final watermarked result $M'$.
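The extruding-normal step (Line 3 of Algorithm 2) can be approximated with a nearest-vertex lookup; the exact query in the paper may operate on the continuous surface rather than on vertices, so this simplification is ours:

```python
import numpy as np

def extruding_normal(mesh_vertices, mesh_normals, wm_centroid):
    """N_i^E: the normal at the mesh point closest to the centroid of
    watermark W_i, used as the direction along which the intersection
    silhouette is extruded. Vertex-level stand-in for an exact
    closest-point-on-surface query."""
    d = np.linalg.norm(mesh_vertices - wm_centroid, axis=1)
    return mesh_normals[int(np.argmin(d))]
```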

A.11 Evaluation Metrics

In this section, we provide additional details of the evaluation metrics discussed in Sec. 5.

A.11.1 Watermark Quality

Watermark Placement Score (WPS): WPS measures the alignment between the watermarks and the mesh. The inputs consist of a mesh $M$ and a bounding box $B_i$. We start by computing the vertices of $M$ that lie inside $B_i$ and denote them as $V^{in}$. Then, we find all faces that have at least one vertex in $V^{in}$ and denote them as $F^c$. From these, we consider only the faces that lie completely inside the bounding box (all three vertices in $V^{in}$) and compute their areas. We then project these areas in the direction of the front face of the box, sum them up, and divide the sum by the area of the box's front face to obtain the watermark placement score. For multiple bounding boxes, we report the mean score across all $B_i$. Note that we use this approximate procedure to compute the area for efficiency, as standard exact methods do not scale well to large numbers of vertices.
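The projected-area computation can be sketched with numpy. The helper name is ours, and the input is restricted, as described above, to faces fully inside the box:

```python
import numpy as np

def wps(inside_faces, front_normal, front_area):
    """Watermark Placement Score for one box: areas of the faces fully
    inside the box (shape (F, 3, 3)) projected onto the front-face
    direction, summed, and normalized by the front-face area."""
    v0, v1, v2 = inside_faces[:, 0], inside_faces[:, 1], inside_faces[:, 2]
    cross = np.cross(v1 - v0, v2 - v0)     # |cross| = 2 * face area
    n = np.asarray(front_normal, dtype=float)
    n /= np.linalg.norm(n)
    projected = 0.5 * np.abs(cross @ n)    # per-face projected area
    return float(projected.sum() / front_area)
```

A single triangle lying flat against the front face, covering it exactly, yields a score of 1.0, which is the ideal alignment.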

Ray Casting Visibility (Ray): Ray measures the visibility of watermarks from all views of the model. We begin by generating views of the watermarked mesh $M'$ by rotating the camera around the $X$ and $Z$ axes in 30° increments. For each view $c_w^t$, we identify candidate watermark meshes oriented within 45° of the camera's direction. Using ray casting, multiple random rays are projected from the top face of each candidate's bounding box towards the camera. A per-watermark score of 1 is assigned if all rays reach the camera without obstruction; otherwise, it is 0. The per-view score is the average of the per-watermark scores in that view, and the final Ray score is the average over all per-view scores.

OCR Visibility (OCR): OCR measures the readability of watermarks from all views of the model. We begin by generating renders of the watermarked mesh $M'$ for each view $c_w^t$, obtained by rotating the camera around the $X$ and $Z$ axes in increments of 30°. Next, we utilize an off-the-shelf OCR detector [9] to identify candidate 2D boxes that potentially contain readable text. To account for text orientations that are not left-to-right aligned, we augment the candidate boxes with rotations of 90°, 180°, and 270°. Then, we use an off-the-shelf OCR recognizer [9] to generate candidate text recognitions. These candidates are scored using a popular sequence matcher [5] to quantify their similarity to the ground-truth watermark text. Finally, for each view $c_w^t$, we take the maximum score and average these scores across all views to obtain the final OCR score.
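The final scoring step maps naturally onto Python's `difflib` sequence matcher (which may differ from the exact matcher cited as [5]); taking the maximum candidate similarity as the per-view score follows the text:

```python
from difflib import SequenceMatcher

def ocr_view_score(candidate_texts, ground_truth):
    """Per-view OCR score: the best sequence-matcher similarity between
    any candidate recognition (across the four tested rotations) and the
    ground-truth watermark text. Empty candidate list scores 0."""
    if not candidate_texts:
        return 0.0
    return max(SequenceMatcher(None, c.lower(), ground_truth.lower()).ratio()
               for c in candidate_texts)
```

The final OCR metric is then the mean of `ocr_view_score` over all rendered views.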

A.11.2 Asset Utility

Sampled Mean Squared Error (SMSE): SMSE computes the Mean Squared Error (MSE) between the watermarked mesh and the original mesh. Since the number of vertices and faces changes after watermarking, the MSE cannot be computed directly. Instead, we start by randomly sampling a large number of points on the surface of the watermarked mesh $M'$. Then, we compute the distances of these sampled points from the original mesh $M$ using a standard routine in the Trimesh package [10]. Finally, we report the inverse of the mean distance as the final SMSE score.
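A minimal sketch of the scoring, substituting a brute-force nearest-vertex distance for Trimesh's exact point-to-surface routine (this substitution, and the function name, are ours):

```python
import numpy as np

def smse(sampled_points, orig_vertices):
    """SMSE sketch: distance from each point sampled on M' to its nearest
    vertex of the original mesh M; the reported score is the inverse of
    the mean distance (higher = closer to the original surface)."""
    d = np.linalg.norm(
        sampled_points[:, None, :] - orig_vertices[None, :, :], axis=-1)
    return float(1.0 / d.min(axis=1).mean())
```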

Isolated Parts Error (IPE): IPE measures the change in mesh topology before and after watermarking. It is computed as the difference in the total number of isolated parts between the watermarked mesh $M'$ and the original mesh $M$. The motivation is that the number of isolated parts in a model should remain identical after watermarking. An increased IPE captures cases where a part of the watermark text is disconnected from the model surface. Such isolated parts degrade the asset utility and can be easily removed to damage the watermark message. Lower IPE indicates less change in the mesh topology and therefore better watermark placement.
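Counting isolated parts amounts to counting connected components of the vertex graph induced by the faces; a union-find sketch (implementation ours):

```python
def count_parts(num_vertices, faces):
    """Number of isolated parts: connected components of the vertex graph
    induced by the face list, via union-find with path halving. IPE is
    the difference between this count for M' and for M."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b, c in faces:
        parent[find(a)] = find(b)          # merge the face's vertices
        parent[find(b)] = find(c)
    return len({find(v) for v in range(num_vertices)})
```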

Local Curvature Error (LCE): LCE measures how well the surface curvature of watermarked areas is preserved (as discussed in Sec. 4.4 and Sec. A.10). For each vertex on the top face of a watermark, we compute the distance to its nearest neighbor on the original mesh surface $M$. This process is repeated for every vertex and watermark, and the LCE is calculated as the variance of these distances. A lower LCE indicates that the watermark conforms to the surface curvature, while a higher LCE indicates deviation from the underlying surface curves.

Saliency Error (SE): SE assesses whether any of the placed watermarks cover the salient features of the original mesh. It is computed by first calculating a normalized continuous saliency map of the mesh $M$ using an off-the-shelf implementation [21]. Then, we threshold the saliency map using Otsu's method [32] to obtain binary per-vertex salient/non-salient scores. Next, for each watermark bounding box $B_i$, we compute a binary saliency vote indicating whether it is placed on a highly salient area: we assign a value of 1 if the average of the thresholded saliency values within the box is greater than 0.5, and a value of 0 otherwise. The average of the saliency votes over all bounding boxes is reported as the saliency score.
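The voting step can be sketched as follows, assuming the per-vertex saliency values have already been binarized with Otsu's method (the input representation, one list of 0/1 values per box, is our assumption):

```python
import numpy as np

def saliency_error(per_box_binary_saliency):
    """SE: each box votes 1 if the mean of the Otsu-thresholded saliency
    values of the vertices it covers exceeds 0.5, else 0; the score is
    the mean vote over all watermark bounding boxes."""
    votes = [float(np.mean(s) > 0.5) for s in per_box_binary_saliency]
    return float(np.mean(votes))
```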

Semantics Score (SS): SS measures how well a model's semantics are preserved by quantifying the change in visual features after watermarking. To compute it, we generate renders of the target mesh $M$ and the watermarked mesh $M'$ by rotating the camera around the $X$ and $Z$ axes in 30° increments. Then, for each pair of corresponding 2D renders, we compute the cosine similarity between their feature vectors extracted using a pretrained ResNet50 [13]. Finally, we average these per-view cosine similarities over all view pairs $\{(c_w^t, c_o^t)\}_{t=1}^{T}$ to obtain the final score.
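The per-view averaging can be sketched with numpy, assuming the ResNet50 features have already been extracted into matrices with one row per view:

```python
import numpy as np

def semantics_score(feats_watermarked, feats_original):
    """SS: mean per-view cosine similarity between render features of M'
    and M. Each input has shape (T, D): one D-dimensional feature vector
    per view pair (c_w^t, c_o^t)."""
    a = feats_watermarked / np.linalg.norm(feats_watermarked, axis=1,
                                           keepdims=True)
    b = feats_original / np.linalg.norm(feats_original, axis=1,
                                        keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))
```

Identical renders yield a score of 1.0; the score drops as watermarking perturbs the extracted visual features.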

Figure 10: Impact of mesh editing attacks. From left to right, the first figure shows the original object with no attacks. The second shows the effect of a severe decimation attack (strength = 0.9) and the third shows the effect of a subdivision attack (strength = 2). As seen, the watermarks are slightly affected by the decimation attack but remain visible. On the other hand, the watermarks are unaffected by subdivision attacks.

Figure 11: Impact of geometric attacks. From left to right, the first figure shows the effect of translation to a random position. The second shows the effect of rotation by a random angle and the third shows the effect of scaling the object. As shown, the watermarks move synchronously and are not affected by these inadvertent transformations.

Figure 12: Impact of Unauthorized Removal Attack: This figure demonstrates the effect of an unauthorized removal attack, where an attacker attempts to eliminate watermarks by deleting all faces and vertices associated with the watermark mesh. The left side shows the attack applied to Li et al.’s baseline method, where the complete watermark message cannot be read. Conversely, the right side showcases the attack on our proposed method, where we can still easily identify the watermark message, demonstrating its superior resilience compared to the baseline.

A.12 More Attacks Analysis

In Sec. 5.4, we presented a preliminary analysis of attacks and robustness. Specifically, we demonstrated the superiority of our method compared to the Li et al. (visible) and Wang et al. (invisible) baselines against three attacks: cropping, unauthorized removal (see Tab. 3 in the main body of the paper), and remeshing attacks (see Fig. 5 in the main body of the paper). In the following section, we extend this analysis and provide additional results.

We begin by analyzing the effects of unauthorized removal attacks on our approach and the Li et al. baseline, as detailed in Sec. 5.4. The qualitative results of this attack are presented in Fig.12. In this attack scenario, we assume a sophisticated adversary who can identify all vertices and faces of the watermarks and remove them using mesh editing software. This task can be quite challenging unless the watermarks are colored with a distinct color. As shown in the figure, even when the attacker knows the vertices and faces, watermarks remain clearly visible in our method due to the silhouette created by the holes. However, for the baseline method, since the watermarks may not fully touch the model surface due to poor orientation, the resulting silhouettes are partial, making the watermarks unreadable.

Next, we conducted tests on typical mesh editing operations, namely decimation and subdivision, using visible watermarks (ours) and invisible watermarks (Wang et al.). We applied a decimation strength of 0.9 and a subdivision strength of 2. These values were chosen to be sufficiently high while ensuring that the visual integrity of the asset remains intact to a normal eye. The results are presented in Fig. 10. In both attacks, the invisible watermark could not be successfully extracted (with less than 50% bit accuracy), whereas our method preserved the watermark well enough for the message to remain clearly readable from multiple angles. Specifically, the clarity of the watermarks degraded slightly under the decimation attack but was completely unaffected by the subdivision attack. We observed that increasing the strength of the decimation attack could completely erode the watermark, but at that point, the utility of the asset was also significantly degraded.

Lastly, we present qualitative results demonstrating inadvertent geometric operations performed in mesh editing software in Fig.11. For these operations, such as rotation, scaling, and translation, the watermarks are also transformed synchronously, hence the watermark quality remains unaffected.

A.13 More Qualitative Results

Fig. 13 and Fig. 14 show some visual examples of the models (from Manifold40 and ObjaVerse) watermarked with our method and the baseline. As shown, compared to the baseline, our method generates watermarks with significantly better placement, orientation, readability, and visibility from multiple views. In contrast, the baseline produces watermarks that either float away from the surface or are hidden beneath it.

Please note that we colored (i.e., added a texture to) all the watermarks in red in all the qualitative results for better observability for the reader. However, as also shown in Fig. 15, the untextured watermarks still provide high visibility for the IP protection of the objects.

Moreover, three textured models (from the collected GenAI Meshy dataset) watermarked with our method and the baseline are illustrated in Fig.15. Similar to the visual examples related to human-made datasets in Fig.13 and Fig.14, our method generates watermarks with significantly better placement, orientation, readability, and visibility from multiple views compared to the baseline.

It should be noted that our method has been optimized to preserve the most salient features of the mesh without considering the texture information. As a result, for the textured models, it is possible that our method places a watermark on the areas that are visually seen as highly salient due to the presence of the texture (i.e., color information). For example, in the textured shark model in Fig.15, a watermark is placed near the eyes and nose of the shark. However, as also illustrated in the untextured version of the object, such details are not present in the mesh, and the selected area to emboss the watermark is smooth without any salient features.

A.14 Limitations and Future Work

Automated visible watermarking offers a practical framework for several critical scenarios, such as GenAI misuse, merchandise protection, and copyright violation. However, being the first work in this direction, it has several limitations that present significant opportunities for future research.

One significant concern revolves around how well our proposed benchmarks for watermark quality and asset utility align with human perception. The impact of visible watermarks on the perceived asset utility can vary significantly depending on the specific downstream application. Additionally, factors like viewing angle, texture, lighting conditions, and background complexity can influence how watermark quality is perceived in different contexts. This variability and subjectivity complicate the usability and reliability of our proposed metrics across all scenarios universally. Addressing these challenges represents an intriguing direction for future research.

Additionally, the robustness of visible watermarks against more intentional attacks poses a significant challenge in certain scenarios. A determined adversary may employ skilled 3D artists to manually remove watermarks and illegitimately sell or use the unwatermarked asset, violating the copyright of the owner. Although such an effort would come at significant cost, due consideration needs to be given to this possibility when employing this technology for practical purposes. Further, it would be an interesting direction for future work to explore and evaluate more sophisticated and automated attacks against our solution and to propose a more resilient defense.

Further, our proposed solution works in four independent steps that are not end-to-end optimized. Specifically, we believe performance can be significantly improved by combining the bounding-box fine-tuning and filtering steps into a single optimization objective. However, this optimization is challenging due to the various discrete operations in the gradient back-propagation process. We leave solving these challenges, and proposing an improved solution that can further the state of the art in this promising direction, to future work.

Finally, beyond technical considerations, visible 3D watermarking raises practical concerns regarding artistic integrity, where a fine balance needs to be maintained between content protection and user acceptance. Additionally, incorporating 3D visible watermarking in real-time or interactive applications imposes computational overhead that may impact performance and user experience. Addressing these multifaceted and practical challenges is essential for unlocking the full potential of visible 3D watermarking.

Figure 13: Two visual examples from Manifold40 showing 3D models watermarked with our method (left) and the Li et al. baseline (right). Ours provides better placement quality, readability, and viewability. We colored the watermarks in red for better observability for the reader.

Figure 14: Two visual examples from ObjaVerse showing 3D models watermarked with our method (left) and Li et al. baseline (right). Ours provides better placement quality, readability, and viewability. We colored the watermarks in Red for better observability for the reader.

Figure 15: Two visual examples from GenAI Meshy showing textured 3D models watermarked with our method (left) and Li et al. baseline (right). Ours provides better placement quality, readability, and viewability.
