Abstract

Visibility observations and accurate forecasts are essential in meteorology, requiring a dense network of observation stations. This paper investigates image processing techniques for object detection and visibility determination using static cameras. It proposes a comprehensive method that includes image preprocessing, landmark identification, and visibility estimation, mirroring the observation process of professional meteorological observers. This study validates the visibility observation procedure using the k-nearest neighbors machine learning method across six locations, including four in the Czech Republic, one in the USA, and one in Germany. By comparing our results with professional observations, the paper demonstrates the suitability of the proposed method for operational application, particularly in foggy and low visibility conditions. This versatile method holds potential for adoption by meteorological services worldwide.

1. Introduction

Observing visibility is a critical component of aviation meteorology, representing a key safety concern that is closely monitored both by human observers and sophisticated instruments. The World Meteorological Organization [1] and the International Civil Aviation Organization [2] have standardized the observation of visibility. Visibility is influenced by factors such as the number of particles in the air, observed precipitation, the angle of the sun, or a combination of these factors. Forecasting visibility is challenging, making it one of the most difficult variables to predict accurately, and it heavily relies on an extensive and precise observation network. However, visibility can exhibit significant local variations, particularly in areas such as valleys near watercourses, vegetation, or human settlements. Therefore, it is essential to maintain a dense network of observation points to ensure comprehensive coverage. Operational meteorologists often leverage various sources, such as webcams, to identify significant local variations in visibility.

The significance of this topic is underscored by the considerable number of studies addressing the issue. These studies explore the analysis of moving camera images or provide a comprehensive assessment of image conditions, including clouds, phenomena, and surface conditions. For instance, a study conducted at Minnan Normal University [3] addressed the technical aspect of assessing the degree of image fogging, which could later be employed to identify boundary scenarios where only the contour of an object is visible. Another study [4] focused on extracting the fog effect from images to recognize objects such as road signs or lane markings in traffic. The authors aimed to develop a real-time processing algorithm applicable to moving camera-based traffic scenarios. Although these studies do not directly determine visibility values, they offer intriguing technical insights into handling foggy images.

In related work, the same authors [5] concentrated primarily on measuring visibility in fog using cameras installed in moving vehicles, applying the principles of Koschmieder's law. They divided the problem into two or three subproblems to be addressed progressively, with a particular emphasis on daytime fog conditions. Previous studies [3, 5] have utilized images captured under specific lighting conditions, predominantly during daylight hours. In operational practice, however, lighting conditions vary, spanning daylight, nighttime scenes, and transitional periods such as dawn and dusk, during which visibility can be perceived differently.

The work conducted by Palvanov and Cho [6] goes beyond previous studies by proposing a visibility forecasting procedure that leverages deep integrated convolutional neural networks. They trained their model using an extensive dataset comprising 3 million outdoor images. The authors claim that their approach achieves superior performance compared to classical visibility prediction methods. However, it is important to note that their work is restricted to daytime imagery, which imposes significant limitations on the scope of the research problem.

A study closely related to the research presented here, focusing on the Alaska region, has been published [7]. The authors aimed to determine the prevailing visibility and assimilation possibilities using a method that involves processing images captured by cameras with 360-degree coverage. Their approach closely resembles the work conducted by professional aviation observers. The authors also concluded that camera observations are particularly effective in monitoring very low visibilities, specifically under low instrument flight rules (LIFR) and instrument flight rules (IFR) conditions.

The research question formulated for this study aims to investigate the feasibility of using ordinary cameras, whose outputs are accessible on the Internet, for reliable visibility observations in order to enhance the density of the station network. The objective is to identify suitable image processing methods for determining visibility during both day and night and to establish a methodology for quantifying visibility. The proposed method must meet the requirement of providing clear and interpretable results, ensuring its potential presentation to the meteorological community and operational usage.

The purpose of this research is to utilize these image data in a manner consistent with the practices of professional meteorologists and propose an automated method that can replace their efforts across a dense network of webcams. The proposed method should be applicable, have low computational requirements, and allow for easy modification if necessary.

2. Data and Processing

The reliable assessment of visibility for an object in a camera image necessitates the fulfillment of specific requirements, which include the following:

(i) Static camera: The camera should be stationary, ensuring that the object being observed does not change its position within the frame.
(ii) Reliable provider: The camera's provider should be dependable, guaranteeing that the acquisition date of the image is accurate and that any malfunctions are reported and resolved promptly.
(iii) Object stability: The object being observed should not undergo significant changes over time. This includes the object not moving, remaining consistent throughout different seasons (e.g., trees or seasonal installations), and not being obscured by other objects (e.g., traffic signs or lower portions of buildings).
(iv) Night visibility: The object should be visible at night, either due to illumination or the presence of lights, enabling observation during nighttime conditions.
(v) Object location on the map: The object must be accurately locatable on a map so that the distance between the camera and the object can be obtained.

These conditions are often fulfilled by high-rise buildings, as they typically possess aviation safety lights and undergo fewer alterations compared to residential buildings. Images from static cameras are commonly provided by national meteorological services such as the Czech Hydrometeorological Institute (CHMI) or the German Meteorological Service (Deutscher Wetterdienst, DWD).

Four webcams in the Czech Republic were used in this study (provider: CHMI, https://www.chmi.cz/files/portal/docs/meteo/kam/, last access: 20 February 2023), one webcam in Salt Lake City, Utah, USA (provider: University of Utah, https://home.chpc.utah.edu/~u0553130/Camera_Display/wbbs.html, last access: 20 February 2023), and one camera operated by the German Weather Service (DWD), located in Offenbach am Main, Germany (provider: DWD, https://opendata.dwd.de/weather/webcam/Offenbach-W/, last access: 20 February 2023).

By adhering to these requirements, the research aims to establish a reliable method for assessing visibility using static camera images from selected locations.

2.1. Imagery Processing Options

From a meteorological perspective, several objectives can be set to analyze visibility in camera images. These objectives include the following:

(i) Determination of phenomena reducing visibility: Identify and analyze specific weather phenomena that contribute to reduced visibility, such as fog, haze, rain, or snow. This involves detecting and quantifying these phenomena within the camera images.
(ii) Precise visibility determination: Develop a method to accurately measure and quantify visibility in the camera images. This requires comparison to real professional and verified observations.
(iii) Shading of landmarks: Assess the impact of shading on the visibility of landmarks or prominent objects within the camera images.
(iv) Determination of visibility limits: Determine the maximum range of visibility in the camera images, taking into account factors such as atmospheric conditions, lighting, and object characteristics.

From a broader image analysis perspective, the tasks can be divided into three basic situations:

(i) Unreduced visibility: Analyze images where visibility is not significantly reduced. These images serve as reference points for evaluating visibility conditions and establishing baseline characteristics of the scene.
(ii) Reduced visibility: Analyze images where visibility is visibly reduced due to atmospheric conditions. This involves quantifying the extent and severity of the visibility reduction, providing insights into the presence and intensity of weather phenomena.
(iii) Night: Analyze images captured during nighttime conditions. This entails developing techniques to handle low-light conditions, ensuring proper visibility assessment and analysis during nighttime hours, when the scene is completely different and landmarks might be invisible.

In line with the observer's approach when visually assessing visibility, the following three parameters play a crucial role:

(i) Light (brightness)
(ii) Color range (contrast)
(iii) Edge recognition

These parameters form the basis for exploratory analysis and preprocessing of the data, as they help uncover relevant patterns and variations in the camera images.

2.2. Color Histograms

From a general perspective, certain assumptions can be made about the appearance of camera images under different visibility conditions:

(i) Unreduced visibility: In images with unreduced visibility, colors are expected to be vibrant, diverse, and exhibit a wide spectrum. The image will contain a range of colors representing various elements within the scene.
(ii) Reduced visibility: In images with reduced visibility, colors tend to be predominantly grey. In specific cases, such as sand obscuration, the dominant color may lean towards ochre. The overall color palette will be limited and subdued due to the presence of atmospheric particles or weather conditions that hinder visibility.
(iii) Night: Nighttime images will primarily consist of dark or black scenes with prominent bright areas. The contrast between dark and bright regions may be more pronounced. The limited amount of available light during nighttime conditions can lead to a different distribution of colors in the scene.

In order to provide a basic overview of the color representation under different visibility conditions, histograms of color distribution in RGB space have been created. These histograms can offer insights into the prevalence and distribution of colors within the image. Figure 1 displays the histograms of color representation based on an example image from Brno, Czechia (as shown in Figure 2).
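As a minimal sketch of this step (not the study's exact analysis code; the file name brno.jpg is a placeholder for any webcam frame), the per-channel histograms can be produced with scikit-image and matplotlib:

```python
# Per-channel RGB histograms for a single webcam frame.
import matplotlib.pyplot as plt
from skimage import io

image = io.imread("brno.jpg")  # placeholder path; H x W x 3 uint8 array

fig, ax = plt.subplots()
for channel, color in enumerate(["red", "green", "blue"]):
    # One bin per possible 8-bit intensity value.
    ax.hist(image[..., channel].ravel(), bins=256, range=(0, 255),
            histtype="step", color=color, label=color)
ax.set_xlabel("Pixel intensity")
ax.set_ylabel("Pixel count")
ax.legend()
plt.show()
```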

The histograms exhibit certain readable characteristics, but they can also be ambiguous and challenging to interpret on their own. Thus, it is important to acknowledge that histograms are an essential part of the analysis but cannot solely serve as a classification parameter. They provide a general understanding of the image’s distinct characteristics before undergoing preprocessing. In more intricate procedures, histogram analysis can serve as an initial classification method, enabling the application of different procedures tailored to specific situations such as day, dawn, night, or other scenarios that can be effectively captured by the histogram data [24].

2.3. Color Scale Operations

In general, the visibility observation relies more on the contrast of objects rather than their specific colors. Therefore, it is important to explore techniques that can enhance contrast, such as isolating single channel values, inverting colors, filtering raster values, or adjusting the image’s color properties [22]. In addition, the method should be applicable for both daytime and nighttime conditions.

As previously mentioned, visibility observations primarily involve identifying objects with optimal contrast against the background sky. Therefore, methods that maximize contrast regardless of sky conditions are preferable.

One approach to enhancing contrast is contrast stretching, which scales the image values from a limited interval to the entire scale of the color band [8]. Several contrast accentuating methods were tested and evaluated (see Figure 3).
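All three tested operations are available in scikit-image's exposure module. The sketch below illustrates them under assumed parameter values; the 2nd-98th percentile stretch and the CLAHE clip limit are illustrative defaults, not the study's tuned settings:

```python
# Contrast stretching, histogram equalization, and adaptive equalization.
import numpy as np
from skimage import exposure, io
from skimage.color import rgb2gray

gray = rgb2gray(io.imread("brno.jpg"))  # placeholder path; float image in [0, 1]

# Contrast stretching: map the 2nd-98th percentile range onto the full scale.
p2, p98 = np.percentile(gray, (2, 98))
stretched = exposure.rescale_intensity(gray, in_range=(p2, p98))

# Global histogram equalization (may amplify noise, as noted below).
equalized = exposure.equalize_hist(gray)

# Adaptive (CLAHE) equalization.
adaptive = exposure.equalize_adapthist(gray, clip_limit=0.03)
```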

Overall, these techniques aim to improve the visibility and distinguishability of objects by enhancing their contrast against the sky background.

The purpose of using these methods was to identify suitable techniques for separating the sky and the horizon or objects on the horizon. From the analysis, it appears that the second and third methods (contrast stretching and histogram equalization) exhibit promising potential. However, it should be noted that these two methods are quite similar in their effects. On the other hand, adaptive equalization did not demonstrate significant contrast differentiation. Moreover, histogram equalization amplified noise in the image, which could be problematic in scenarios involving precipitation, mist, or reduced lighting conditions.

Another method examined was image decolorizing, which involves converting the image to grayscale. This technique can be useful for enhancing contrast between elements such as the sky, buildings, or light, as it eliminates differences in color bands. Consequently, the resulting image primarily focuses on brightness and contrast.

To effectively address contrasting surfaces, the power law transformation method was used. This technique adjusts the image to align more closely with human visual perception, resembling the observations made by human observers [9]. This is due to the nonlinear relationship inherent in human eye perception, contrasting with the linear relationship associated with camera lenses. In this case, two specific methods were tested: logarithmic and gamma transformations. Gamma correction is a nonlinear operation that is used to encode and decode luminance in a manner that aligns with human visual perception [10]. The transformation function of gamma correction can be expressed by the following equation:

$$V_{\mathrm{out}} = V_{\mathrm{max}}\left(\frac{V_{\mathrm{in}}}{V_{\mathrm{max}}}\right)^{\gamma}$$

Here, $V_{\mathrm{in}}$, $V_{\mathrm{out}}$, and $V_{\mathrm{max}}$ denote the initial, output, and maximal pixel intensity values, respectively, and the gamma power parameter $\gamma$ is chosen by the user.

Gamma correction has been shown to help separate objects on the horizon and increase their contrast (Figure 2). This is especially useful when viewing an object against the sky. Thus, the method appears to be suitable for extracting landmarks.
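In scikit-image, the power law transformations above correspond to exposure.adjust_gamma and exposure.adjust_log; a minimal sketch with illustrative gamma values:

```python
# Gamma correction of a webcam frame (power-law transformation).
from skimage import exposure, io

image = io.imread("brno.jpg")  # placeholder path

# gamma > 1 darkens mid-tones; gamma < 1 brightens them.
darker = exposure.adjust_gamma(image, gamma=2.0)
brighter = exposure.adjust_gamma(image, gamma=0.5)

# Logarithmic transformation, the other tested variant.
log_adjusted = exposure.adjust_log(image)
```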

2.4. Edge Detection

To complement the image decoloring and histogram modification methods, it is important to incorporate techniques for detecting visibility edges and areas with contrast variations. In this regard, several readily available methods in Python packages can be utilized.

One effective method for object detection is contour detection based on the marching squares algorithm, the two-dimensional analogue of marching cubes [11]. For testing purposes, the skimage.measure.find_contours method [8] was used, and the results were visualized using separate colors in Figure 4.
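A minimal sketch of this step follows; the contour level of 0.5 (mid-grey) is an illustrative choice, not the value tuned for the study:

```python
# Contour detection via marching squares with skimage.measure.find_contours.
import matplotlib.pyplot as plt
from skimage import io, measure
from skimage.color import rgb2gray

gray = rgb2gray(io.imread("brno.jpg"))  # placeholder path

# Iso-valued contours at mid-grey intensity.
contours = measure.find_contours(gray, 0.5)

fig, ax = plt.subplots()
ax.imshow(gray, cmap="gray")
for contour in contours:
    # Each contour is an (n, 2) array of (row, column) coordinates.
    ax.plot(contour[:, 1], contour[:, 0], linewidth=1)
plt.show()
```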

The results obtained using skimage.measure.find_contours show promise, even during dawn hours when the nature of objects transitions from being visible to detectable by light. The method effectively separates contrasting objects, as evident from the picture. However, it is important to note that the performance of this method is highly dependent on lighting conditions and may yield different results at varying angles of the sun or moon [23].

Considering the previously mentioned preprocessing methods where the image values were converted to greyscale, the next step is to test the edge detection procedure. This approach capitalizes on the characteristics of the preprocessed image and identifies areas with high contrast. The Sobel (also known as Sobel–Feldman) or Roberts operators appear to be suitable choices for this purpose. These operators enhance the detection of edges and provide valuable information for further analysis [12].
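Both operators are available in scikit-image; a minimal sketch, assuming a greyscale input and a placeholder file name:

```python
# Edge detection with the Sobel-Feldman and Roberts operators.
from skimage import filters, io
from skimage.color import rgb2gray

gray = rgb2gray(io.imread("brno.jpg"))  # placeholder path

edges_sobel = filters.sobel(gray)      # gradient magnitude from Sobel kernels
edges_roberts = filters.roberts(gray)  # Roberts cross operator
```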

Although the results of both the Sobel and Roberts operators appear similar (Figures 5(a) and 5(b)), it is evident that the Sobel–Feldman operator effectively distinguishes edges within the image. However, it is important to highlight that without prior preprocessing steps such as greyscale conversion or histogram equalization, the Sobel edge detection method would be challenging to apply to night imagery. Figure 5(c) specifically showcases the results obtained solely through the Sobel edge detection technique.

Figure 5 demonstrates the effectiveness of the Sobel operator in highlighting edges and detecting areas of contrast. By incorporating appropriate preprocessing techniques, the Sobel–Feldman method can be utilized to enhance edge detection and facilitate further analysis, particularly in daylight or well-lit conditions.

Testing the filters on unadjusted images shows that the procedure is useful for object detection both during the day and at night, although edges are detected more reliably in greyscale images with increased contrast.

3. Object Detection

The primary objective of this research is not solely limited to image processing but rather focuses on determining visibility conditions and identifying the current situation depicted in the image. This includes detecting phenomena such as fog, mist, or nighttime fog. A significant challenge in classifying the entire image dataset arises from the potential variability of objects present. Factors such as changing facades of buildings, moving vehicles, and temporary obstructions necessitate the identification and tracking of reference points within the image. The proposed preprocessing method will be evaluated on these landmarks to assess its effectiveness in detecting them both during the day and at night.

This approach aligns with the workflow of a weather observer and offers advantages in terms of intervention and classification. Unlike neural networks, we can actively participate in the classification process by identifying multiple landmarks and determining if any changes or disappearances have occurred. This level of control is possible due to our precise knowledge of the specific landmarks being tracked.

3.1. Object Determination

For the test image from the Brno camera, the building on the left side was chosen as a visible landmark (Figure 6). The chosen location meets the specified criteria of visibility, being well-lit and unlikely to be obstructed by temporary objects. However, the measured horizontal distance of 1500 m falls below the desired threshold from a meteorological standpoint. Nonetheless, it still provides valuable information regarding visibility reduction.

In Figure 7, the effectiveness of the preprocessing techniques can be observed. The image is transformed through greyscaling (using the viridis colormap for improved contrast), gamma adjustment, and Sobel–Feldman filter for edge detection. These methods successfully detect object features even in conditions of both good and poor visibility. In addition, the image texture undergoes significant changes in situations of reduced visibility, resulting in a distinct texture that can be effectively captured by the algorithms.
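The chain can be expressed compactly, as in the sketch below; the crop coordinates and gamma value are placeholders for illustration, not the actual landmark window used in the study:

```python
# Preprocessing chain for one landmark crop:
# greyscale conversion -> gamma adjustment -> Sobel-Feldman edge detection.
from skimage import exposure, filters, io
from skimage.color import rgb2gray

def preprocess_landmark(path, top, bottom, left, right, gamma=1.5):
    """Return the edge map of a consistently cropped landmark window."""
    image = io.imread(path)
    crop = image[top:bottom, left:right]           # same margins on every frame
    gray = rgb2gray(crop)
    adjusted = exposure.adjust_gamma(gray, gamma)  # accentuate object/sky contrast
    return filters.sobel(adjusted)                 # edge magnitude map

edges = preprocess_landmark("brno.jpg", 200, 360, 40, 220)  # placeholder values
```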

By employing this method, points will be identified on each tested image using a consistent procedure. As the reference images exhibit relatively clear characteristics, they can be utilized for classifying the tested images. However, this approach requires the supervision of an individual who must determine appropriate coordinates of the object within the image. This step is crucial as the object needs to be consistently cropped with the same margins in order to achieve the highest possible accuracy.

3.2. Conditions Classification

A fully supervised classification approach could be employed using the image subtraction method [8]. In this method, the resulting histogram or image should ideally be an empty array or consist solely of black color values. This indicates that there is no significant difference between the tested image and the reference image, allowing for accurate classification (Figure 6).
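A minimal sketch of this check follows; the mean-difference threshold is an illustrative placeholder (see the note on thresholds below):

```python
# Image subtraction against a reference crop: a near-zero (near-black)
# difference indicates the tested image matches the reference class.
import numpy as np

def matches_reference(test_crop, reference_crop, threshold=0.05):
    """Compare two float greyscale crops of identical shape."""
    difference = np.abs(test_crop - reference_crop)
    return difference.mean() < threshold
```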

The method demonstrated is generally effective under normal conditions. However, variations in lighting or other changes in appearance could lead to higher values in the histograms. Therefore, when using image subtraction alone, it becomes necessary to establish threshold values to distinguish between random influences and genuine changes in weather conditions.

3.3. K-Nearest Neighbor Classification

By selecting a segment from the images, we obtained representative reference images. This allows us to use the K-nearest neighbor machine learning classification method. The K-nearest neighbor method is highly efficient, with a short training time, and is easily interpretable by all users [25]. It is widely utilized in various domains, including monument care [13, 14], medical science [15], and financial analysis [16].

The method aims to identify the most similar set of points (objects; in our case vectors, since the matrix of pixels is flattened into one dimension) from a training set of labeled objects [17]. Based on a specified parameter, a certain number of nearest points (nearest neighbors) are selected for consideration (Figure 8). A distance calculation is performed to determine the nearest neighbors. When the parameter k is set to 1, the method classifies based on only the single nearest neighbor.

Due to the significant differences among individual image segments, the parameter k was set to k = 1. In situations where we have more controlled labeled data, a higher value for k could be utilized. This would be beneficial when the object’s appearance varies under similar conditions, such as changes in illumination during nighttime. However, it should be noted that there is no assumption that these highly distinct images (e.g., clear, foggy, or nighttime) would be misclassified.
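A minimal sketch of the classifier using scikit-learn follows; the random arrays merely stand in for the preprocessed reference crops, and the label codes anticipate those defined in Section 5.1:

```python
# k-nearest neighbor classification of flattened landmark crops (k = 1).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for the preprocessed (greyscale/gamma/Sobel) reference crops.
rng = np.random.default_rng(0)
train_crops = [rng.random((160, 180)) for _ in range(4)]
train_labels = ["OK", "ON", "BR", "FG"]  # one label per reference crop

# Flatten each pixel matrix into a one-dimensional feature vector.
X_train = np.stack([crop.ravel() for crop in train_crops])
model = KNeighborsClassifier(n_neighbors=1)  # single nearest neighbor
model.fit(X_train, train_labels)

test_crop = rng.random((160, 180))
predicted = model.predict(test_crop.ravel()[np.newaxis, :])
```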

4. Proposed Method

In accordance with WMO regulation number 8 [1], each station is required to create a plan detailing the objects used for observation, including their distances and bearings from the observer. This requirement was crucial and led to the establishment of a comprehensive workflow procedure, consisting of four fundamental steps (Figure 9): data acquisition, preprocessing, object segmentation, and result determination.

It is important to highlight that the manual aspect of the work does not have to be laborious. For instance, Python provides the plotly package [18] for interactive image visualization, allowing the display of frame coordinates as tooltips when hovering the cursor. This facilitates the swift determination of object coordinates within the image.
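For example, a sketch assuming the plotly.express interface (the file name is a placeholder):

```python
# Interactive display of a frame; hovering reports pixel (x, y) coordinates,
# which makes reading off landmark crop boundaries quick.
import plotly.express as px
from skimage import io

image = io.imread("brno.jpg")  # placeholder path
fig = px.imshow(image)
fig.show()
```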

After thorough review, testing, and careful consideration of the strengths and limitations of each annotated method, the subsequent processing procedure was established (Figure 10).

The entire process is characterized by its transparency and simplicity. The code is concise and easily adaptable, allowing for convenient modifications as needed. In addition, the process is highly efficient and can be further optimized through pretraining and model saving techniques. These enhancements contribute to a streamlined workflow and improved time efficiency.

5. Results

The proposed method underwent testing on various objects located in different areas. In addition to the Brno campus building, which was previously used as an example, three other images from Skalky, Kobylí, and Klínovec were utilized for testing purposes. These images had a resolution of 1600 × 1200 pixels. Furthermore, two foreign cameras in Salt Lake City, US, and Offenbach am Main, Germany were employed for special testing and validation in conjunction with regular meteorological reports. The individual images of the first selected landmark, a radio tower situated 1500 m away from the Skalky radar station, are displayed in Figure 11.

The third station, located in the village of Kobylí, South Moravian region, did not have any lights. However, the image preprocessing method demonstrated remarkable success under all conditions. The power line post and the cycling path were selected as orientation points (Figure 13(a)). Their visibility under nighttime conditions is illustrated in Figure 12. It is worth noting that, since the site is situated at the edge of the village, some residual light may have aided the identification of individual objects.

In the event of classification failure during the testing period, an alternative preprocessing filtering method can be applied for this specific case. Since the power line column is vertical, the Prewitt vertical filter (Figure 13(c)) can be utilized, which is capable of detecting objects that are particularly challenging to observe. The Prewitt filter is specifically designed for extracting the vertical gradient using a 3 × 3 kernel. Its results indicate how the image changes at a specific point, providing insights into the presence and orientation of edges [20].
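A minimal sketch of this fallback branch (the file name is a placeholder):

```python
# Vertical Prewitt filter: emphasizes vertical structures such as the post.
from skimage import filters, io
from skimage.color import rgb2gray

gray = rgb2gray(io.imread("kobyli.jpg"))  # placeholder path
vertical_edges = filters.prewitt_v(gray)  # vertical gradient, 3 x 3 kernel
```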

In Figure 12(d), there are visible lights; however, these lights are not consistently turned on, making them unsuitable for classification. They can be considered as a backup point in case the lights are on.

If the classification process is successful, there is no need to incorporate an additional branch of the algorithm to apply the Prewitt filter. However, it is always important to have contingency options available, particularly in the field of meteorology.

Moving on to the fourth station, the Klínovec ski resort, the landmark chosen for observation was the cable car column (Figure 13). Despite being located at a distance of only 200 m, which limits the visibility information it can provide, this case effectively demonstrates that image preprocessing can successfully resolve objects against the sky background when residual light is present.

The classification testing was conducted on the available landmarks, with an additional labeled training image featuring a cable car seat hanging in front of the column to account for changes in its shape. However, it is expected that the column without the seats would be the most similar nearest neighbor.

5.1. Testing Classification

Given the high resolution and visibility of the landmarks in the classified images (from the training set), extensive testing of similar situations was not necessary. It is assumed that the classification will accurately evaluate most common scenarios but may incorrectly assess situations with significant changes in the appearance of the landmarks. To simplify the classification process, the following categories were established, considering the varying distances to the landmarks:

(i) Good visibility during the day (code: OK)
(ii) Good visibility at night (code: ON)
(iii) Mist (code: BR)
(iv) Fog (code: FG)

While the use of meteorological terminology has been partially abandoned, with fog traditionally defined as visibility below 1 km, it was assumed that full obscuration of a landmark at a distance of 1500 m indicates low visibility. The classification output primarily indicates the presence or absence of the landmark. The test set imagery was carefully selected to encompass all the different situations for thorough evaluation. The classification categories depend on the quality of the observed points or the absence thereof. The results of the KNN classification for k = 1 are presented in Table 1.

It is indeed a notable achievement that the classifier attained a 100% success rate. However, it is crucial to recognize that the effectiveness of a classifier relies heavily on the quality of its training set. Situations such as building reconstruction, camera malfunction, or the absence of lighting (which is uncommon in high-rise buildings for safety reasons) can potentially pose challenges for the classifier’s performance.

One significant advantage of the current approach is that the objects are selected by a human operator. This human involvement ensures accuracy and helps mitigate potential inaccuracies. While the introduction of automatic detection or increased autonomy may reduce the amount of human work required, it could also introduce more inaccuracies. Hence, the current level of human intervention provides a valuable advantage in maintaining the reliability of the classification process.

5.2. Complex Situation Testing

The next image tested was obtained from a south-facing camera operated by the Department of Atmospheric Sciences at the University of Utah, Salt Lake City, as part of the Mesowest project [21]. This camera captures images with a resolution of 1280 × 960 pixels and features both illuminated objects and a relatively rugged horizon.

Two main landmarks were selected for testing: the Social and Behavioral Sciences building located 500 meters away and the Rice-Eccles Stadium situated within 650 meters (Figure 14). In addition, a third landmark, the suburb and ridge located 5 kilometers away, was chosen to assess potential mist conditions. These objects were tested to determine the three categories of fog, mist, and good, which do not differentiate between day and night as all three segments contain illuminated objects.

Each frame in the image was assigned a number and name, along with visibility values for night and day conditions. The visibility status indicates whether the object is visible or not, and the corresponding phenomenon it represents, such as mountains obscured (HOBSC), mist (BR), or fog (FG). In addition, the position of the boundary pixels was recorded (Table 2).

In the classification process, mountain ridges were assigned a visibility value of zero, as this represents the lowest guaranteed visibility: an elevated ridge can remain visible even when the air near the surface is obscured. Therefore, when mountain peaks are visible, the correct statement is only that the visibility is greater than or equal to 0 m.

Despite the addition of this information, the classification process remained unchanged. All points were classified in all images, but the results were not considered if the landmark visibility was not reliable.

The algorithm produced only two misclassifications out of 80 tested images (from November 2022 to February 2023) that covered all the target classes. In these cases, the algorithm failed to recognize the building as visible, despite being manually labeled as such. However, these two errors are subject to debate and require further analysis (refer to Figure 15 for visual reference).

These classification errors can be considered marginal situations since the absence of illumination at the landmark can be attributed to the low quality of the observation point. In the latter case, where the visibility is borderline, even a human observer may estimate the visibility marked by the building to be less than 500 m. This highlights the need for a coherent methodology for labeling.

Considering the distance between the observation site and Salt Lake City International Airport (12 km), a manual comparison revealed significant differences in the observed visibility due to varying local conditions. In addition, the airport’s location to the west of the south-facing webcam contributed to the contrasting results.

Although the automatic comparison of observed visibility and webcam results was deemed unrepresentative, the relatively promising outcome encouraged further research to enhance the observations by introducing new classes of objects.

Given the captivating nature of the scene, the observer naturally contemplated other orientation points (frames 1, 2, and 5 in Figure 14). Consequently, the possibility of observing additional phenomena and situations was explored. Examining the image, it becomes evident that there may be instances where the mountains are obscured while visibility remains high, or where the base of the mountains is obscured but the peaks remain visible, among other complexities. This introduces additional characteristics to consider in more intricate scenarios.

Out of the 80 images used, only one new error occurred. One image from early night was mistakenly classified as a good visibility situation, although it is more likely to be a mist (Figure 16).

The low quality of the landmark area should be taken into consideration in the analysis. To determine the mist at night, the lights in the suburbs were used as a reference. Therefore, it would be more appropriate to use this landmark only during the day. Alternatively, the classification model should be refined and trained more accurately for comparable situations.

Regarding this camera, it is important to note that the image quality is not high, and the quality of the observed points, especially at night, is also limited. However, the diverse nature of the scene contributes to the identification of different phenomena in the daylight. It can be considered a success that out of the 80 cases tested, there were only three errors in the full testing scenario, and only one error when using only high-quality landmarks.

5.3. Standard Reports Visibility Comparison

A camera located in Offenbach am Main, Germany, was selected for testing the visibility classification against standard weather reports. This camera is operated by the reliable German Weather Service (DWD) and is oriented towards the west, capturing views of Frankfurt am Main city center and the International Airport (ICAO code: EDDF). The horizon in the camera’s field of view offers prominent landmarks in the form of high-rise buildings, allowing for direct comparison with official observation reports issued at the airport. In this evaluation, METAR reports were utilized for comparison, which are issued twice an hour (at the 20th and 50th minute), providing a test set comprising approximately 190 measurements.

To serve as orientation points, the European Central Bank building (distance = 3 km; positions in the original imagery: 1502, 1705, 1997, and 2099) and the Commerzbank building (distance = 5 km; positions in the original imagery: 1549, 1699, 1935, and 1985) were chosen (Figure 17). It should be noted that one potential complication when examining the nighttime images is the slight blurring caused by pollution. However, this blurring should not significantly affect the differentiation between fog and good visibility.

In the initial testing phase, where only one image was used for training and each situation (visible/obscured; day/night) had four corresponding images, errors were observed in the classification of boundary situations in the test set from February 15th to 18th, 2023 (Figure 18).

For the second testing, the model was retrained and the erroneous image was added to the training set to correct the misclassification. After this adjustment, only one mismatched value remained. This discrepancy occurred on 16th February 2023 at 0:50 UTC (METAR time code 202302160050), where the reported visibility from the airport was 3400 meters, but it was manually verified to be lower in the city. This observation further highlights the algorithm’s potential benefit in recognizing different visibility values from various locations.

Regarding the second building (Commerzbank), which serves as the 5 km threshold, the same testing approach was applied. There were more instances of discrepant values, primarily on 16th February during the day, but the majority of them were correctly classified. Hence, the algorithm successfully detected the difference in visibility compared to the airport. The only notable error occurred in the night image of 17th February at 20:20 UTC, where the character of the lights may have changed due to heavy pollution or alterations in the city center’s lighting (Figure 19).

It is interesting to note that the algorithm rated the image as more fog-like due to the lack of contrast between the building and its surroundings. Although the Commerzbank tower is the tallest building in Frankfurt, choosing another landmark that contrasts more with its background, such as one viewed against the sky, would have yielded better results.

The near error-free determination of object visibility can be considered a success, as there was only one error in three days of observation during three tests (two for the European Central Bank building and one for the Commerzbank building). It is worth mentioning that this error can be rectified through model retraining or by selecting a different landmark for classification.

6. Discussion and Conclusions

The proposed research aimed to explore the use of static cameras for visibility determination in meteorology. The methodology involved several technical steps, including image decolorizing, gamma correction, the Sobel–Feldman filter, and the KNN classifier. In addition, several other preprocessing methods were demonstrated to explore additional possibilities and enhance the applicability of the proposed approach across various conditions. The preprocessing and object identification steps were carefully selected to mimic the process of professional human visibility observation and to achieve accurate results.

Evaluation of the method was conducted at various locations using cameras provided by different institutions, such as the Czech Hydrometeorological Institute (CHMI) in the Czech Republic and the German Weather Service (DWD) in Offenbach am Main. The results demonstrated a high success rate in scene determination, indicating the suitability of the method for visibility determination using static cameras.

Alternative approaches, as discussed in the relevant literature, include the use of neural networks, intensity evaluation of detected edges, and contrast assessment, among others. These approaches offer potential avenues for further research and may yield alternative insights and improvements to visibility determination using static cameras. It is worth noting that while the proposed method requires initial human effort in data labeling and object selection, this manual supervision enhances the algorithm's reliability, making it particularly suitable for aviation purposes. Importantly, it should be highlighted that a subset of objects chosen for method validation was located within a 500 meter radius, including the Kobylí power line post, the Klínovec cable car column, and the Salt Lake City Social and Behavioral Sciences building. The selection of these landmarks was particularly significant due to the criticality of accurately detecting very low visibilities. Notably, the proposed method demonstrated reliable visibility determination even under such challenging conditions, with outcomes contingent on the quality of the respective landmark.

In Table 3, an overview of the advantages of the proposed method is provided. These advantages include its close resemblance to the process of human professional visibility observation, applicability in day and night conditions, the possibility of using multiple objects, distinct categories with low error probability, a highly understandable process, adjustability for each individual station, low memory and computing demands, and short training and prediction (classification) time. However, the proposed method does have limitations, as summarized in Table 4.

To address the aforementioned disadvantages, potential solutions can be explored in further research. While certain limitations related to landmark quality, light conditions, and positioning may be inherent, other issues can be mitigated.

Dealing with varying threshold values can be partially resolved through improved visualization and interpretation techniques. For instance, certain points may be disregarded during nighttime when they are not discernible. In addition, users could have the flexibility to set a threshold distance according to their specific needs. Once this threshold is reached, the output can signify visibility falling below the set threshold.

To alleviate processing-related drawbacks, an automatic algorithm could be developed to suggest landmarks based on image recognition or geographical methods. This would reduce the need for manual landmark selection and improve efficiency.

Furthermore, the utilization of moving cameras could enable the detection of extreme situations such as dense fog. While the reliability level may be limited, the algorithm could also identify areas with notably high grey color content in the image, indicating potential visibility challenges.

These proposed measures aim to address the mentioned disadvantages and enhance the overall performance and versatility of the method. Further research and development in these areas can lead to improved outcomes and expanded capabilities.

In conclusion, the proposed algorithm offers a practical solution for visibility determination using static cameras. The research has demonstrated the method’s applicability in day and night conditions, the ability to use multiple objects, and its low memory and computing requirements. The advantages outlined in Table 3 confirm its potential as a reliable and efficient visibility determination tool. However, certain limitations identified in Table 4, such as directionality and threshold variations, highlight areas for further improvement.

To enhance the method, future work could focus on incorporating a larger number of quality control points in the data, expanding the training dataset for improved accuracy, and addressing challenges related to result visualization and interpretation. The algorithm will be deployed in the application phase, where efforts will be made to optimize the visualization process and overcome any interpretational challenges. Overall, the proposed method makes a valuable contribution to visibility determination in meteorology, providing practical utility and opportunities for further enhancements in the field.

Data Availability

The imagery data used to support the findings of this study have been retrieved from CHMI: https://www.chmi.cz/files/portal/docs/meteo/kam/, last access: 20 February 2023. For an image from a specific station: Brno: https://intranet.chmi.cz/files/portal/docs/meteo/kam/prohlizec.html?cam=brno; Kobylí: https://intranet.chmi.cz/files/portal/docs/meteo/kam/prohlizec.html?cam=kobyli; Klínovec: https://intranet.chmi.cz/files/portal/docs/meteo/kam/prohlizec.html?cam=klinovec; University of Utah: https://home.chpc.utah.edu/~u0553130/Camera_Display/wbbs.html, last access: 20 February 2023; and DWD: https://opendata.dwd.de/weather/webcam/Offenbach-W/, last access: 20 February 2023. The complete code will be provided by the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Acknowledgments

I would like to acknowledge all the providers of camera imagery from all three states, my home institution for providing observational data and making the research possible, and Prof. John D. Horel, the University of Utah, Atmospheric Sciences Department, for hosting the research. In this context, I would also like to acknowledge the Fulbright Czech Republic organisation that made the cooperation possible. This work was supported by the Project for the Development of the Organization, DZRO Military Autonomous and Robotic Assets.