A novel design of a visual communication system for animated-character graphic images in a virtual reality environment is proposed, because existing visual communication systems fail to identify edge data, which results in poor motion continuity of animated characters. This paper investigates how graphic image processing can best be incorporated into visual communication design. Using methods such as picture processing, visual image processing, and word processing with various processing tools, three design stages of object refinement, graphic image reorganisation, and graphic image refinement are used to complete the application of graphic image processing in visual communication design. The Sobel edge operator is also introduced to detect image edge data more accurately. The motion of an animated character was generated in simulation tests by the designed system and an existing system, respectively. The effectiveness of the designed system is confirmed by examining the motion-capture comparison chart of the animated character, which shows that the motions produced by the designed system are more coherent and that the continuous effect of the limb positions during motion is similar to that of a real person.
The ability of people to appreciate beauty is constantly developing, raising the bar for visual communication design. Graphic image processing is crucial in visual communication design [1]. As a result of this trend, many nations have declared the animation sector one of their pillar industries [2]. Cartoon animation is increasingly becoming a significant form of entertainment and a tool for teaching. The animation industry in China started slowly but is now growing quickly and has made significant progress in the development and study of new animation production tools. The visual communication of animated characters' graphic images has also been applied to the creation of animation for virtual reality environments [3]. Presentation has gradually progressed from the original two-dimensional plane to the present three-dimensional form; such a change depends primarily on the visual communication system to better reflect the overall effect of visual communication [4].
Visual communication technology is a significant area of creative advertising design. In creative advertising design, virtual reality technology is utilised to integrate many elements to expand the design language, optimise the layout, generate unexpected artistic effects, and improve the business environment [5]. Advertising practitioners must master both technology and content: they must be able to combine virtual object models, materials, lighting, and other elements of the virtual scene into a realistic virtual world, using association, borrowing, and other technical methods. The advertising industry is currently undergoing a transformation. Virtual reality technology is being used to create visual effects that further catch the audience's attention and improve the advertising effect, and new advertising formats and business models are also being developed [6].
Application software interface optimisation is achieved through interface design and detail optimisation of graphic images, and interaction interface optimisation design is updated in terms of theme, sharing, and ease of development. This design is then incorporated into the conventional mobile phone system interface style, with custom theme colour schemes, defined shapes and fonts, and theme editor functions [7]. As machines gradually replace human hands and painting software develops, art education and art colleges are also seeking new directions; graphic image design and visual communication technology continue to advance, and virtual technology brings a new revolution to the design of painting illustrations [8,9]. Virtual technology provides a highly realistic mode of working that is no longer a simple picture-overlay processing mode but an artificial-intelligence-assisted one: the software offered by virtual reality technology can be used to colour and complete the outlines of draft designs based on hand-drawn graphic pictures, handling light and dark changes and object matching.
The continuous effect of the motion of animated characters is unsatisfactory because existing methods do not capture and recognise edge data when creating graphic images of animated characters. To address this, a novel design for a visual communication system for graphic images of animated characters in a virtual reality setting is offered [8,9]. Utilising graphic elements, picture elements, and colour elements in concert to create versatile visual communication works significantly enhances the impact of visual communication.
When processing animated images, the animated-character graphic image visual communication system needs to employ a 3D engine, and the renderer is the essential component of the 3D engine [10]. The renderer employs a rasterisation technique with an appropriate hardware architecture for rendering and is built on an underlying graphics application programming interface (API) [11]. The two most widely used APIs are DirectX and OpenGL. The renderer motherboard must be designed, as shown in Figure 1, in order to adapt to these APIs.
The Huainan X79-8D dual-socket large board was chosen for the renderer. It uses the E-ATX form factor and provides eight DDR3 memory slots supporting 1866/1600/1333 MHz memory [12]. Its SATA configuration is 2 x SATA 6 Gb/s and 4 x SATA 3 Gb/s, and the graphics card slot configuration is 2 x PCI-E 3.0 x16, with one 24-pin and two 8-pin power connectors. The power supply uses 7+7 phases, the PCB has 10 layers, and the board carries a Realtek ALC887 7.1-channel audio chip [13]. With this renderer motherboard, computer-generated graphics of animated figures can be made to appear hand-drawn, optimising the visual communication experience and completing the hardware design of the graphic image visual communication system [14].
The edges of an animated character's graphic can change significantly during the outline phase. The Sobel edge operator [15] is introduced in order to detect the image edge data more accurately. Let the grey-scale function of the animated character's graphic image be \(f\left( {x,y} \right)\), where \(x\) and \(y\) are the horizontal and vertical coordinates of a pixel. Denoting the gradient in the \(x\)-direction by \({G_x}\), we obtain:
\[\begin{aligned} \label{e1} {G_x} ={}& \left[ f\left( x+1,y-1 \right) + 2f\left( x+1,y \right) + f\left( x+1,y+1 \right) \right] \\ &- \left[ f\left( x-1,y-1 \right) + 2f\left( x-1,y \right) + f\left( x-1,y+1 \right) \right]. \end{aligned}\tag{1}\]
The 3 × 3 gradient matrices in the grey-scale function are obtained from the corresponding pixel weighting factors in the gradient calculation; the horizontal and vertical convolution kernels are given in Eq. (2):
\[\label{e2} \left[\begin{array}{ccc} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{array}\right],\qquad \left[\begin{array}{ccc} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{array}\right].\tag{2}\]
The system in this study uses a discrimination threshold of 130 [16]: when the Sobel operator is used for edge recognition, points whose gradient magnitude exceeds this threshold are marked as edge points. The following relationship, in the form of a particle swarm velocity update, links the recognition speed of the image's contours to the pixel position:
\[\label{e3} {v_{k + 1}} = {v_k} + {c_1}{r_1}\left( {Pbest_k} - {x_k} \right) + {c_2}{r_2}\left( {Gbest_k} - {x_k} \right),\tag{3}\]
where \(v_k\) and \(x_k\) are the velocity and position at iteration \(k\), \(Pbest_k\) and \(Gbest_k\) are the individual and global best positions, \(c_1\) and \(c_2\) are learning factors, and \(r_1\) and \(r_2\) are random numbers in \([0,1]\).
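To make the edge-recognition step concrete, a minimal Python sketch is given below. It applies the kernels of Eq. (2) with the discrimination threshold of 130 and assumes the input image is a two-dimensional NumPy array of grey levels; the function name and the SciPy-based convolution are illustrative choices, not part of the described system.

```python
import numpy as np
from scipy.ndimage import convolve

# Horizontal and vertical Sobel kernels from Eq. (2)
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)

def sobel_edges(gray, threshold=130):
    """Mark as edge points the pixels whose gradient magnitude
    exceeds the discrimination threshold (130 in this system)."""
    g = gray.astype(float)
    gx = convolve(g, KX, mode="nearest")  # x-direction gradient, Eq. (1)
    gy = convolve(g, KY, mode="nearest")  # y-direction gradient
    return np.hypot(gx, gy) > threshold   # boolean edge map
```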
Because the edge image is relatively coarse at the single-pixel level, it needs to be refined, and a well-established refinement technique is the Hilditch thinning algorithm [17]. The first step is to define the 'display operator rendering' program, global proc resolutionGate(), after which several further steps complete the recognition and thinning of the edges of the animated character's graphic image.
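The text names Hilditch's algorithm but does not list its removal conditions; the sketch below therefore uses the closely related Zhang–Suen thinning scheme, which refines a binary edge map on the same principle of iteratively peeling removable boundary pixels. It assumes the boolean edge map produced by the Sobel step above.

```python
import numpy as np

def zhang_suen_thinning(edges):
    """Peel boundary pixels from a binary edge map in two alternating
    sub-iterations until a one-pixel-wide skeleton remains."""
    img = edges.astype(np.uint8).copy()

    def neighbours(y, x):
        # p2..p9, clockwise from the pixel directly above (y-1, x)
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    def transitions(p):
        # count 0 -> 1 transitions in the circular sequence p2, ..., p9, p2
        return sum(a == 0 and b == 1 for a, b in zip(p, p[1:] + p[:1]))

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marked = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    p = neighbours(y, x)
                    p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                    if not (2 <= sum(p) <= 6 and transitions(p) == 1):
                        continue
                    if step == 0 and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0:
                        marked.append((y, x))
                    if step == 1 and p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0:
                        marked.append((y, x))
            for y, x in marked:
                img[y, x] = 0
            changed = changed or bool(marked)
    return img.astype(bool)
```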
The primary goal of an animated character’s motion simulation is to govern the character’s motion behaviour, which can vary greatly in complexity [18]. Figure 2 displays a typical model for regulating the behaviour of animated characters.
Behavioural decisions are made based on the surroundings, and the production of motion capture files is controlled throughout this process. As illustrated in Figure 3, the file node, the file information node, and the motion capture module interact when the motion capture module's files are handled [19].
A file namespace exists within the external name of the motion capture file. The file namespaces form a mapping table, shown in Table 1.
Parent namespace ID | Namespace | Namespace ID
---|---|---
– | – | 01
01 | File | 02
01 | other | 03
02 | Render1 | 04
02 | Render2 | 05
02 | Render3 | 06
02 | Render4 | 07
02 | Render5 | 08
01 | File | 09
Table 1 has three fields: the name of the namespace, its identifier, and the identifier of the parent namespace pointing to the next-higher-level namespace. Each namespace identifier uniquely identifies a namespace, and because every entry carries a parent namespace identifier, the entries form a linked-table structure that supports decentralised storage. The rendering cluster's file information service thus enhances both the consistency of animated character movement during animation production and the reliability of information storage. This completes the design of the visual communication system for the graphic images of animated characters in the virtual world.
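As an illustration of this linked-table structure, the following is a minimal Python sketch of a namespace mapping table populated with the rows of Table 1; the class and method names are assumptions for illustration only, not part of the described system.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class NamespaceEntry:
    ns_id: str
    name: str
    parent_id: Optional[str]  # None for the root entry

class NamespaceTable:
    """Linked-table namespace mapping: each entry points to its parent,
    so entries can be stored in a decentralised way."""
    def __init__(self) -> None:
        self._entries: Dict[str, NamespaceEntry] = {}

    def add(self, ns_id: str, name: str, parent_id: Optional[str] = None) -> None:
        self._entries[ns_id] = NamespaceEntry(ns_id, name, parent_id)

    def qualified_name(self, ns_id: str) -> str:
        """Follow parent pointers up to the root to build the full name."""
        parts = []
        current: Optional[str] = ns_id
        while current is not None:
            entry = self._entries[current]
            if entry.parent_id is not None:  # skip the unnamed root
                parts.append(entry.name)
            current = entry.parent_id
        return ":".join(reversed(parts))

# Populate with the rows of Table 1
table = NamespaceTable()
table.add("01", "-", None)
for ns_id, name, parent in [("02", "File", "01"), ("03", "other", "01"),
                            ("04", "Render1", "02"), ("05", "Render2", "02"),
                            ("06", "Render3", "02"), ("07", "Render4", "02"),
                            ("08", "Render5", "02"), ("09", "File", "01")]:
    table.add(ns_id, name, parent)

print(table.qualified_name("06"))  # -> "File:Render3"
```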
The image seen by the human eye is stored, during actual storage, mainly as a continuous image, and the corresponding two-dimensional image vector expression is obtained, i.e.
\[\label{e4} g\left( {x,y} \right) = \left\{ {{g_r}\left( {{x_i},{y_i}} \right),{g_g}\left( {{x_i},{y_i}} \right),{g_b}\left( {{x_i},{y_i}} \right)} \right\}.\tag{4}\]
Where \({g_r}\left( {{x_i},{y_i}} \right)\) is the vector function of the red primary colour in the image signal, \({{g_g}\left( {{x_i},{y_i}} \right)}\) is the vector function of the green primary colour, \({{g_b}\left( {{x_i},{y_i}} \right)}\) is the vector function of the blue primary colour, \(g\left( {x,y} \right)\) is the signal expression of the original image, \({{x_i}}\) is the light intensity function in the image, and \({{y_i}}\) is the spatial coordinate function in the image. Different primary colours correspond to different colour channels. For computer graphics images, when the colour saturation is 0, it can be shown that \({g_r}\left( {{x_i},{y_i}} \right) = {g_g}\left( {{x_i},{y_i}} \right) = {g_b}\left( {{x_i},{y_i}} \right)\). In this case, the computer graphics image appears grey, i.e. it is a black-and-white image. With the help of this colour representation, an image partitioning model can be built effectively. First, to ensure that all components are held safely and securely in the clustering centres, the visual communication partitions are unified and pooled [20]. Second, the combined visual data is used to regulate and control the distance between each visual communication partition and its cluster, and the final calculation results are entered directly into the classifier as a histogram [21]. To minimise quantisation errors, each clustering centre is encoded. Additionally, to continuously improve the ability to discriminate discrete information, the initial discretisation method is replaced after encoding by a weighted processing method, which considerably enhances picture recognition performance [22].
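To illustrate the histogram step, the sketch below performs nearest-centre quantisation of local features with optional weighting, in the spirit of the weighted processing method described above; the function signature and array shapes are assumptions for illustration.

```python
import numpy as np

def encode_histogram(features, centres, weights=None):
    """Assign each local feature to its nearest cluster centre and build a
    normalised (optionally weighted) histogram for the classifier.

    features: (n, d) array of local descriptors
    centres:  (k, d) array of cluster centres
    weights:  optional (n,) per-feature weights
    """
    # Pairwise distances between features and centres: shape (n, k)
    dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    hist = np.bincount(nearest, weights=weights,
                       minlength=len(centres)).astype(float)
    return hist / max(hist.sum(), 1e-12)  # normalise to sum to 1
```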
To properly confirm the validity and reliability of the visual communication design method, its performance was tested thoroughly through an experimental comparison. First, the dataset parameters were established in accordance with the created graphic image dataset: four distinct reference images were identified and 36 photos resembling them were chosen, giving 40 class-representative images across four classes. (1) Class A images: a comprehensive collection of building decoration images. (2) Class B images: a comprehensive collection of pet images. (3) Class C images: a comprehensive collection of images of everyday household items. (4) Class D images: an extensive library of pictures of natural settings. These 40 images were distributed at random among 9,960 other images to create a 10,000-image sample, which served as the dataset. From this dataset, 2,000 images were drawn by random sampling as training samples, and the remaining 8,000 images were used as test samples. When characterising the original image features, attention is paid to extracting the image scales using subsets and dividing them into image blocks of radius \(\sqrt {2R}\), with \(R\) usually taken as 16. In addition, the pixel spacing of all feature intervals is set uniformly to 2, at which point the area of each image scale is calculated as \(\sqrt {2R} \times \sqrt {2R}\). If the feature distribution fully meets the criteria of a Gaussian distribution, the fixed scale can be set uniformly via a spatial pyramid scheme, and the program tool is configured on a designated personal computer (PC) to train the training samples accurately.
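A minimal sketch of the random 2,000/8,000 training/test split described above, assuming images are referenced by identifiers; the function name and fixed seed are illustrative choices.

```python
import random

def build_splits(image_ids, n_train=2000, seed=42):
    """Shuffle the 10,000-image sample and split it into
    2,000 training images and 8,000 test images."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(image_ids)
    rng.shuffle(ids)
    return ids[:n_train], ids[n_train:]
```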
To determine the effects of parameter changes on other indicators, it is important to test and analyse the method's relevant parameters, such as the dictionary space capacity of the feature space. Figure 4 illustrates how the algorithm's accuracy tends to climb as the dictionary space capacity increases: the gain slows once the dictionary size reaches 1024 MB, and accuracy also climbs with the number of feature space distributions, peaking at 256 MB of feature space distributions. It is evident that the picture recognition accuracy changes with both the dictionary space capacity and the number of feature space distributions. When these two parameters are zero, the image recognition accuracies are 67% and 73% respectively, their lowest values [23]. Retaining as much information as possible considerably enhances the saturation of the feature space, ensuring that it can always operate at peak performance. At this stage, if the dictionary space capacity is set uniformly to 1024 MB, the corresponding number of feature space distributions should be set to 256 MB. As the image recognition accuracy increases towards its maximum value, the optimal setting of the weight parameter must be evaluated in order to balance recognition accuracy against recognition efficiency, resulting in the spatially clustered image shown in Figure 5.
As observed in Figure 5, with the dictionary space capacity and the number of feature space distributions at 1024 MB and 256 MB respectively, the accuracy of the method exhibits slight fluctuations as the weight parameter increases. The image identification accuracy reaches 89.59% when the weight parameter is set to 0.34.
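One way such a weight-parameter search could be run is sketched below; `evaluate` is a hypothetical callback that trains and scores the recogniser for a given weight value and is not part of the original system.

```python
import numpy as np

def sweep_weight(evaluate, weights=np.arange(0.0, 1.01, 0.02)):
    """Evaluate recognition accuracy over candidate weight values and
    return the best (weight, accuracy) pair; 0.34 was optimal in Figure 5."""
    results = [(w, evaluate(w)) for w in weights]
    return max(results, key=lambda t: t[1])
```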
Four types of images are chosen from the determined test set and combined as the iconic samples for image recognition. The four types of image samples are then detected and recognised in 800 test set images to obtain the image recognition results of the three methods, allowing a comparison of the image recognition effect of this method against the conventional PCANet plus linear discriminant method and the computer vision method.
Method | Class A image recognition accuracy (%) | Class B image recognition accuracy (%) | Class C image recognition accuracy (%) | Class D image recognition accuracy (%)
---|---|---|---|---
Method of this article | 82.37 | 83.37 | 87.14 | 85.76
PCANet and linear discriminant method | 75.38 | 76.65 | 81.19 | 74.54
Computer vision method | 73.76 | 72.87 | 79.30 | 73.75
Table 2 shows that, for all three methods, Class C images have a comparatively high recognition accuracy relative to the other three classes. It can be observed that, in the context of applying computer graphics image processing technology, building visual communication instances with the visual communication design method proposed in this study improves the ease of recognition of simple images.
Images were required both to verify the system's construction and to present the imaging effect. An animated character was used as the experimental subject, and screenshots were captured at four randomly chosen time points of the character's motion as generated by the designed system and by the existing visual communication system, respectively [24]. The motion images created by the existing system were compared with those created by the system in this research in order to depict the effectiveness of the system more intuitively. The experimental findings are displayed in Figure 6.
To facilitate observation, the key motion nodes of the three-dimensional animated character are replaced by dots, with the four points of the left limb represented by 1 to 4 and the four points of the right limb represented by 5 to 8. Figure 6 illustrates how the designed system's animated character moves in a coherent manner. The character's first action is lifting the left foot; the second is stepping forward with the left foot as the stride widens; the third is the landing of the left foot; the fourth is lifting the right foot and repeating the left foot's action; the fifth is the landing of the right foot. The posture of the limbs during the walking motion produces a continuous effect comparable to the movement of a real person, and the succession of walking movements is portrayed coherently. In contrast, the existing approach merely captures displacement images of animated characters, with no change in limb movement [25]. This shows that the method created for animated-character graphic visual communication in a virtual reality setting is more effective at creating continuous motion in animated character graphic images [26].
In the colour design of graphic pictures, the RGB three primary colours are employed, whereas the CMYK four-colour model is used in the design of printed works. Red, Green, and Blue (RGB) values are overlaid as pixel values on the display. Combining the three primary colours, each with 256 brightness levels, RGB yields 256³, approximately 16.78 million, distinct colours. Cyan, magenta, yellow, and black make up the four colours of CMYK, each varying from 0% to 100%. Different display devices may produce different colour representations; as a result, a palette of 216 web-safe colours, comprising 210 chromatic colours and 6 achromatic (grey) values, is typically used to create visual homogeneity. These colours are represented in hexadecimal format to avoid distortion under different display conditions.
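As a small illustration of the web-safe palette, the sketch below snaps an 8-bit RGB colour to the nearest web-safe value (channel values restricted to 00, 33, 66, 99, CC, FF, i.e. multiples of 51) and formats it in hexadecimal; the function name is illustrative only.

```python
def to_web_safe_hex(r, g, b):
    """Snap an 8-bit RGB colour to the nearest web-safe channel values
    and return the hexadecimal representation used on the web."""
    snap = lambda v: 51 * round(v / 51)  # nearest multiple of 51
    return "#{:02X}{:02X}{:02X}".format(snap(r), snap(g), snap(b))

print(to_web_safe_hex(128, 200, 30))  # -> "#99CC33"
```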
The three graphics in Figure 7 were created utilising graphic image processing technologies as design pieces for visual communication. Through the processing of graphic components, picture elements, and colours in the design of visual communication, the three works successfully enhance the visual communication effect.
This research provides a novel visual communication method for the graphic images of animated characters in a virtual reality setting. Simulation experiments generated the animated character's motion using the designed and existing systems, respectively. The effectiveness of the designed system is confirmed by an analysis of the animated character's motion-capture comparison chart, which shows that the motions created by the designed system are more coherent and that the continuous effect of the limb positions during motion is similar to that of a real person.
This study received funding support from the following source: 2023 Guangdong Construction Vocational and Technical College campus-level Research and Innovation Team Project: Digital Media Technology Research Team, Project Number CXTD202307.