Abstract

Power generation through solar photovoltaics has grown significantly in recent years. However, high penetration of solar PV creates power system operational issues as a result of solar PV variability and uncertainty. Short-term PV variability mainly occurs due to the intermittency of cloud cover. Therefore, to mitigate the effects of PV variability, a sky-image-based, localized, global horizontal irradiance forecasting model is introduced that considers individual cloud motion, cloud thickness, and cloud elevation above ground level. The proposed forecasting model works independently of any historical irradiance measurements. Two inexpensive sky camera systems were developed and placed at two different locations to obtain sky images for cloud tracking and cloud base height estimation. Irradiance values were then forecasted 1 minute, 5 minutes, and 15 minutes ahead of real time, both onsite and for a PV site located 2 km from the main camera. Results show that the three-level cloud categorization and the individual cloud movement tracking method introduced in this paper increase the forecasting accuracy. For partially cloudy and sunny days, the model achieved a positive skill factor with respect to the persistence model at the 15 min forecasting interval. The accuracy of determining the correct irradiance state for a 1 min forecasting interval using the proposed model is 81%. The average RMSE, MAE, and SF obtained using the proposed method for the 15 min forecasting horizon are 101 W m⁻², 64 W m⁻², and 0.26, respectively. These forecasting accuracy levels are considerably better than those of the other benchmarks considered in this paper.

1. Introduction

With declining prices and the promotion of green energy, power generation using solar photovoltaics (PV) continues to grow as an alternative, variable power generation option in many countries [1, 2]. For example, as of 30 September 2020, there were over 2.56 million PV installations in Australia, with a cumulative capacity of more than 18.5 GW [3]. Furthermore, solar PV accounted for 5.6% of total generation in 2019 and was the fastest-growing generation type in 2018 and 2019 [4].

However, due to solar PV variability and intermittency, the increased penetration of solar PV into the power system can cause operational and management issues. The solar irradiance on PV panels varies with date, time, location, and panel orientation relative to the sun [5]. It is well known that the diurnal and annual solar irradiance patterns are highly predictable, and solar variability over longer time intervals can be easily estimated. However, the amount of solar irradiance that reaches the surface of the earth varies with the intermittency of cloud cover, impacting short-term solar energy production. As these variations create significant fluctuations in the solar power fed into the grid, methods that can predict solar irradiance at ground level, and thus the corresponding PV power generation, are necessary to ensure the effective management of electrical grids [6–8].

Reference [9] categorized solar PV forecasting methods based on forecast time horizon (short-term, medium-term, and long-term), historical data, and forecasting methods. Historical-data-based PV forecasting models use PV power output and related meteorological variables as inputs. The persistence model, physical models [10], support vector machine-based models [11, 12], and artificial neural network-based models [13] are some of the methods used to forecast PV power over different forecasting time intervals.

Solar power/irradiance forecasting is a powerful tool that can be used to mitigate problems associated with short-term variations of the generated solar power. Intrahour (a few seconds to a few minutes) forecasting is used to identify local ramp up/down events a few minutes in advance [14] and can identify pending energy shortfalls, which is helpful for managing PV inverters and energy storage systems effectively. PV fluctuations create issues in maintaining steady-state voltages at the distribution grid within statutory limits [15]. If a large number of PV plants are connected to the distribution grid, voltages at the PV connection points will rise when PV generation is high and the captive load is low [16]. Therefore, voltage rise is considered one of the dominant issues limiting the ability of the distribution grid to host high PV penetration. As a remedy, and to maintain the voltage within statutory limits, On-Load Tap Changers (OLTC) with smart distribution management systems (S-DMS) [17] are used. Short-term solar forecasting is one of the main building blocks of S-DMS, as it is required to predict the network status to control and manage smart inverters and smart transformers.

Furthermore, short-term forecasting helps overcome the partial shading caused by passing clouds, another major problem in PV systems [18]. In partial shading conditions, reconfiguration of shaded and nonshaded modules in PV arrays enhances the power output by distributing the shading effects evenly [19] without changing the physical module locations. Short-term cloud shadow forecasts at the location of the PV plant are used as a control signal to the array reconfiguration process to minimize the effect of partial shading, especially on large-scale PV plants. Further, solar forecasting is used as an input to smart battery management systems to compensate for PV shortfalls [20]. Short-term solar PV forecasts are also used in energy market activities; for example, in Australia, 5-minute PV forecasts are used in the operation of the Australian National Electricity Market (NEM) [21].

Short-term fluctuations in a PV plant mainly occur due to the shadows of clouds and shadows created by fixed objects like buildings, trees, and mountains. The effect of shadows created by fixed objects on the power output of a PV plant is deterministic, as these shadows can be mapped onto the PV plant according to the zenith and azimuth angles of the sun at a given time. However, the effects created by cloud shadows vary from time to time depending on cloud velocity, cloud size, cloud position in the sky, cloud thickness, and cloud texture. Hence, this is random behaviour that requires stochastic prediction, as opposed to the deterministic part, which requires extrapolation. Therefore, this paper focuses on a physical forecasting model developed based on cloud shading on PV plant generation.

For short-term power forecasting or real-time power prediction, cloud information from ground-based sky images and time series models based on historical data are widely used [11, 22, 23]. The sky-image-based PV forecasting approaches reported in the literature are summarised in Table 1 together with the methodologies used.

There are limitations in the existing sky-image-based forecasting methods and in the equipment used to acquire the data. For example, the cameras used in [10, 24–26] are expensive to install on a large scale, and some camera systems use proprietary software. Further, the local cloud base height (CBH) information used in [10, 25, 27] is obtained from ceilometers located around 10 km away from the sky imager, which can introduce significant shadow position errors when mapping cloud shadows onto the ground. Furthermore, the methods presented in [28, 29] need previous irradiance measurements to produce forecasts.

The deficiency of the technique introduced in [10, 27] is that the entire cloud area is assumed to move at a uniform velocity throughout the image, without considering individual cloud movement. Furthermore, in [10], the GHI drop due to a cloud shadow is assumed to be a constant percentage. The use of a single GHI dropping percentage is not robust, since the decline in GHI may differ according to the thickness of the cloud.

The cloud tracking using the Lucas-Kanade optical flow algorithm in [30] needs a higher image capture frequency to obtain smooth cloud movement. Furthermore, the major weakness of the model presented in [31] is that it is an onsite-only forecasting model. Moreover, Reference [32] used the tracking details to find changes in the features of the sun-blocking window and did not use cloud motion tracking and shadow casting directly in the forecasting model.

Considering the limitations of the state-of-the-art methods, this paper introduces a multiple-site irradiance forecasting model improved by a CBH calculation using two cameras placed at different locations. A new cloud pixel identification method was introduced to identify cloud areas in the sky image. Furthermore, a novel approach was introduced to classify the cloudy pixels, dividing them into three groups based on the color properties of each cloud pixel. The irradiance dropping factor was defined using the cloud pixel category: instead of the single dropping factor assumed in [10, 14], an irradiance dropping factor based on the cloud color properties in the image was introduced.

Furthermore, individual clouds were tracked separately, without assuming that the entire cloud area moves at a uniform velocity throughout the image. A normalized cross-correlation algorithm was utilized to estimate the cloud motion vectors, which is more convenient than other optical flow techniques. Here, the cross-correlation method (CCM) [34] was applied to each cloud separately, without taking the total cloud area as one segment. This enables the determination of cloud movements in multiple layers. Finally, irradiance forecasts were obtained for the onsite PV system (e.g., location 1 in Figure 1) as well as for PV systems located away from the main camera (e.g., location 3 in Figure 1), by utilizing a clear day irradiance profile generated using the ASHRAE clear-sky model [35] together with the irradiance drop percentage corresponding to the cloud category and the drop occurrence time.

This paper is structured as follows: Section 2 describes the methodology, which provides a detailed description of the new forecasting model covering the cloud segmentation, motion tracking algorithm, and the cross-correlation-based cloud base height calculation method. Section 3 is a case study developed based on the methodology, and it provides the details of the developed hardware setup and the results. The conclusions are presented in Section 4.

2. Methodology

2.1. Data

Visual measurements of the full sky area with high spatial and temporal resolution are needed to obtain accurate irradiance forecasts from cloud motion tracking. Therefore, the sky images were captured by a camera with a large FOV, which captures most of the clouds that cast shadows on the plant location and allows clouds to be tracked for a longer duration, thus extending the forecasting time horizon. Obtaining images at a higher spatial and temporal resolution enables accurate tracking of cloud movement. Figure 1 illustrates how two different FOVs capture clouds: a narrow FOV (FOV 1) lens captures a small area of the sky and thus might not capture enough detail to forecast the irradiance.

In addition to sky images, to forecast the solar irradiance using the movement of the cloud shadows, (a) the direction of the sky camera with respect to the true north, (b) zenith and azimuth angle of the sun, (c) clear day irradiance profile, and (d) CBH (height above the ground level) are needed. Therefore, the zenith and azimuth angles of the sun and the clear day irradiance profile were calculated based on the information in [19]. For the calculation, the details of time, date, longitude, and latitude of the PV system locations were required.

Typically, clouds reside in different layers of the sky; therefore, the CBH of each cloud differs. The CBH of the clouds above a specific area can be obtained from a ceilometer, but ceilometers are generally expensive. Average CBH values can be obtained from the nearest aviation or weather monitoring centers; however, they can introduce significant forecasting errors. Therefore, to forecast the power generation at PV plants located in the neighborhood of the camera accurately, a local CBH estimation method based on two cameras was introduced. To obtain the CBH using the proposed method, clouds captured by camera 1 must also be captured from another location, so sky images obtained simultaneously from a second sky camera were used (camera placed at location 2 as in Figure 1). The irradiance forecasts for several locations were obtained by extracting the image pixels that correspond to the shadow cast on specific PV locations using local CBHs (e.g., location 1 and location 3 in Figure 1).

2.2. Forecasting Methodology

This section provides details of the forecasting methodology developed to forecast irradiance at multiple PV sites. In this method, for one iteration, the images taken during a one-minute interval were used for all of the 1 min, 5 min, and 15 min irradiance forecasts. Figure 2 shows the flow diagram of the forecasting model. It is divided into nine main steps, described in detail in this section.

2.2.1. Blue-Sky Area Separation

The correct identification of cloud regions in the sky image is critical, as irradiance is forecasted based on the movement of those clouds. A cloud with large vertical development appears in shades of grey and creates a substantial drop in ground-level irradiance. Therefore, if a grey cloud is incorrectly identified as a blue-sky area, the result will be significantly erroneous. To alleviate this, a blue-sky area separation method was developed to obtain correct cloud regions from the sky images.

In this process, white cloud pixels and blue-sky pixels were separated based on their red and blue component values. However, as discussed in [27], it is not possible to separate pixels belonging to grey clouds from blue-sky areas using the R, G, or B values alone. Therefore, to separate only the blue-colored pixels from the sky image, the YCbCr color space was introduced. The YCbCr color space enables bluish or reddish color components to be separated [36]: Y is the luminance of the color plane, Cb is the chrominance dominated by the blue color, and Cr is the chrominance dominated by the red color. Since Cb is strong in bluish regions (the blue-sky area), it was used with a threshold value to separate the blue-sky area from the sky image. The Y, Cb, and Cr components were obtained from the R, G, and B pixel values using

Y_i = 0.299 R_i + 0.587 G_i + 0.114 B_i, (1)
Cb_i = 0.564 (B_i − Y_i) + 0.5, (2)
Cr_i = 0.713 (R_i − Y_i) + 0.5, (3)

where Y_i, Cb_i, and Cr_i are the Y, Cb, and Cr components of the i-th pixel and R_i, G_i, and B_i are the R, G, and B components of the i-th pixel (all normalized to [0, 1]).

As the initial step of the blue-sky area separation process, images captured under sunny, overcast, and partially cloudy sky conditions were manually chosen. From the selected images, pixel indexes related to blue-sky areas, white cloud areas, and grey cloud areas were extracted manually (an equal number of pixels was selected for the three categories) and labeled. Then, three arrays for the three pixel classes were created from the labeled pixels, and the selected images were converted into YCbCr plane images using (1)–(3). After that, the Y, Cb, and Cr components of the selected pixel indexes were extracted and placed in the corresponding pixel arrays. Then, each data array was split into a training dataset and a testing dataset. After that, a 3D scatter plot of the Y, Cb, and Cr components was generated from the training dataset. In the scatter plot, three different colors were used to represent the three pixel categories to reveal the different clusters. The property most dominant for classifying pixels related to clouds versus the blue-sky area was identified from the scatter plot.

Further, the properties that cannot be used for separation were omitted. Then, considering pixels related to clouds (grey pixels and white pixels) and pixels related to the blue-sky area, a histogram was generated for the dominant property (Y, Cb, or Cr). After that, the threshold value for the dominant component was chosen so that the erroneous pixel counts related to clouds and to blue-sky areas cancel out. Then, considering the generated scatter plot and the selected threshold value of the dominant pixel separation property, the threshold values for the other parameters were found. Finally, the method was validated using the testing dataset. This cloud pixel identification method was applied to the raw image to generate a binary cloud image in which white blobs represent the clouds or the sun and the black area represents the blue-sky region.
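As a minimal sketch of this separation step (assuming the full-range YCbCr transform of (1)–(3) and the Cb/Cr threshold values reported later in Section 3.2.1; all variable names are illustrative):

```matlab
% Minimal sketch: blue-sky separation into a binary cloud image.
% Assumes the full-range RGB-to-YCbCr transform of eqs. (1)-(3) and the
% thresholds found in Section 3.2.1 (Cb > 0.58 and Cr < 0.46 identify
% blue-sky pixels); names are illustrative.
rgb = im2double(imread('sky_image.jpg'));   % H x W x 3, values in [0,1]
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);

Y  = 0.299*R + 0.587*G + 0.114*B;           % luminance, eq. (1)
Cb = 0.564*(B - Y) + 0.5;                   % blue-difference chroma, eq. (2)
Cr = 0.713*(R - Y) + 0.5;                   % red-difference chroma, eq. (3)

blueSky     = (Cb > 0.58) & (Cr < 0.46);    % bluish, noncloud pixels
binaryCloud = ~blueSky;                     % white blobs = clouds (and sun)
```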

The boundaries of the binary image obtained from the thresholding method consist of jagged edges. To smooth the image boundaries and remove image noise, image filters such as the median filter, the Wiener filter, and order-statistic filtering functions in Matlab® are widely used [37]. To find the best filter (the one with the least edge distortion) for cloud boundary smoothing, the three filters were applied to the binary image, and the correlation coefficients between the filtered images and the original binary sky image were obtained. The filter with the highest correlation was chosen for cloud boundary smoothing.
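A sketch of the filter comparison, assuming a 5 × 5 neighborhood for all three filters (the window sizes are not stated in the paper):

```matlab
% Apply three candidate smoothing filters to the binary cloud image and
% keep the one whose output correlates most strongly with the original,
% i.e., the one distorting the cloud boundaries the least.
bw   = double(binaryCloud);
medF = medfilt2(bw, [5 5]);               % median filter
wieF = wiener2(bw, [5 5]);                % Wiener filter
ordF = ordfilt2(bw, 13, true(5));         % order-statistic filter (median of 5x5)
scores = [corr2(bw, medF), corr2(bw, wieF), corr2(bw, ordF)];
[~, best] = max(scores);                  % index of the least-distorting filter
```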

2.2.2. Individual Cloud Identification

In the blue-sky area separation process, cloud areas (these may include the sun as well, but that is not an issue as the sun shows no movement) and blue-sky areas were identified separately. In this step, individual clouds were identified from the binary sky image generated in the previous section using the connected component algorithm. This is essential for tracking individual cloud movement to obtain cloud velocities. Further, using the connected component algorithm, the number of white blobs and the total pixel count of each blob were obtained, as they are required by the sky categorization process (explained in Section 2.2.3).
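A minimal sketch using Matlab®'s connected-component tools (the specific function choices are assumptions; the paper names only the algorithm):

```matlab
% Label the white blobs of the binary cloud image and gather the
% blob count and per-blob pixel counts needed in Section 2.2.3.
cc = bwconncomp(binaryCloud);                    % 8-connected white blobs
numBlobs   = cc.NumObjects;                      % number of blobs (clouds/sun)
pixelCount = cellfun(@numel, cc.PixelIdxList);   % pixels per blob
labels     = labelmatrix(cc);                    % label image for per-cloud access
```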

2.2.3. Sky Categorization

The first image in each image group (1 min image set) was categorized into one of three categories, sunny, partially cloudy, or overcast, depending on the number of white blobs and on the percentage of white pixels in the binary image. The white pixel percentage (WPR) was calculated using (4):

WPR = (number of white pixels / total number of image pixels) × 100. (4)

Then, if the image was classified as partially cloudy or overcast, the cloud tracking algorithm was applied to the image set.

2.2.4. Cloud Pixel Categorization

In the cloud pixel categorization process, pixels in the cloud area were categorized as belonging to thick clouds, white clouds, or bright white clouds by considering the different grey levels of the cloud pixels in the image. For the three cloud pixel categories, irradiance dropping factors were found by comparing the onsite irradiance measurements corresponding to each pixel category with the clear day irradiance. Since the irradiance was forecasted considering the whole cloud, a single irradiance dropping factor was obtained for each cloud: the clouds were categorized as thick clouds, white clouds, or bright white clouds, and the irradiance dropping factor of each cloud was assigned based on the pixel category with the highest number of pixels in the cloud (the mode of the pixel categories within the cloud).

2.2.5. Cloud Pixel Tracking and Velocity Extraction

Clouds can be found at different heights in the sky and, depending on height, may move at different velocities. Weather forecasts usually provide only global cloud velocity information. As accurate prediction of PV drops and shading effects requires locally extracted velocities, this section describes a cloud velocity estimation method for individual clouds.

The identified individual clouds (in the first image, Im1) and the other images in the image set were used as the input data to this process. If the first image was classified as a partially cloudy or overcast sky condition, a set of pixel coordinates inside the separately identified cloud regions was selected for tracking through an iterative process, such that the distances between pixel coordinates in the x and y directions have the same pixel spacing.

To track the points from one image to the next, the Matlab® normalized cross-correlation function was used [34]. To apply cross-correlation, a template image and a search window were selected. To track a point from the first image to the second, an n × n pixel area around the point's (x, y) coordinates on the first image frame was selected as the template image. Following this, an m × m pixel area (m > n) around the same (x, y) coordinates on the second image frame was selected as the search window. The red component (of the RGB image plane) of the template image and of the search window segment were used in the cross-correlation function.

In this process, it was assumed that the point at coordinates (x, y) does not move beyond the m × m pixel area over the time interval Δt (the time between two images). The maximum-correlation points were selected as the corresponding points in the next image. Since the images were captured at a high rate, the change in cloud shape between frames is negligible.

The cross-correlation method was applied again to track the points from the second image frame to the third image frame. Likewise, these steps were repeated for all images and for all clouds. If the image set has six image frames, there are five movement vectors for each selected cloud point from the first image frame to the sixth, calculated from the differences between the x and y coordinates of each point in consecutive frames. Then, the moving velocity of each point throughout the image set was assigned as the average of its frame-to-frame velocities. After that, histograms of the point velocity magnitudes and angles were created, and the magnitude corresponding to the highest point count in the velocity magnitude histogram together with the angle corresponding to the highest point count in the angle histogram was assigned as the velocity vector of the cloud.
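A sketch of one tracking step with Matlab®'s normxcorr2, assuming the n = 40 and m = 70 window sizes of Section 3.2.5, red planes red1/red2 of two consecutive frames, a point pt = [x y] away from the image border, and dt = 10 s (all names are illustrative):

```matlab
% Track one cloud point from frame k to frame k+1 by normalized
% cross-correlation of its red-plane template inside a larger search
% window centred at the same coordinates.
n = 40; m = 70;                          % template and search-window sizes
x = pt(1); y = pt(2);
template = red1(y-n/2 : y+n/2-1, x-n/2 : x+n/2-1);   % frame k
search   = red2(y-m/2 : y+m/2-1, x-m/2 : x+m/2-1);   % frame k+1

c = normxcorr2(template, search);
[ypk, xpk] = find(c == max(c(:)), 1);    % correlation peak
% displacement of the template inside the search window, in pixels
dy = ypk - size(template,1) - (m - n)/2;
dx = xpk - size(template,2) - (m - n)/2;
velocity = [dx, dy] / dt;                % pixels per second
```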

After that, the cloud moving velocity was obtained as the median speed and the median direction of the selected points in the cloud.

The above-mentioned cloud motion tracking step is not required if the sky image was categorized as sunny, as in that situation there will be only one white blob, corresponding to the sun.

2.2.6. Cloud Base Height (CBH)

Accurate cloud base height (CBH) details are required to forecast irradiance for multiple PV sites located within a few kilometers of the camera location (location 1), because the CBH is used to determine the image area that casts a shadow on the PV plant location. Considering cost and accuracy, calculating the local CBH from sky images [38–40] is the best option; hence, a sky-image-based CBH calculation method was used in this paper.

In the sky-image-based CBH calculation method, at least two wide-angle cameras are required. The clouds that belong to the overlapping image area (the common area marked in Figure 3) of the two cameras were used to calculate the CBH. The overlapping area of the two images and the minimum resolvable CBH (H_min) vary with the FOV and the distance between the two cameras. Clouds below H_min cannot be seen in both images; in other words, clouds located below H_min cannot be captured by both cameras at the same time. Therefore, it is important to choose the distance between the two cameras considering the average minimum CBH (H_min) at the PV location and place them accordingly.

The following assumptions were made about the two-camera system when calculating CBH:
(i) Both cameras were placed in a horizontal plane
(ii) The camera height from the base was assumed to be zero, and no height difference between the two cameras was considered
(iii) No vertical development of the cloud (the effect of vertical development of the cloud is considered when calculating the irradiance drop percentage)
(iv) Since two identical cameras were used, the equidistant mapping functions of both cameras were the same

Figure 3 shows the placement of two cameras for CBH calculation and how a sample point in the cloud is represented in the sky image.

To calculate the CBH, both images were initially aligned to north. Figure 4 shows the flowchart detailing the CBH calculation. First, a few cloud boundary points on binary sky image 1 (from location 1) were selected. Here, the boundary points of the clouds were found by applying the Matlab® "bwboundaries" function to the binary cloud image. Considering possible distortions at the boundary of the image obtained from the fisheye lens camera, cloud boundary points closer to the center of the image were selected (by checking the distance of each boundary point relative to the center point of the image). From the selected points, a number of points were chosen at random for the calculation of CBH. Then, by assuming different CBH values, the positions of the randomly selected cloud boundary points were mapped onto the sky image captured simultaneously by the second camera at location 2, using the mapping function of the camera lens. Equations (5)–(9) provide the mathematical relations used to map the cloud boundary points of sky camera 1 onto sky camera 2 for different cloud base heights.

Following the above, image segments around the cloud boundary points were selected as the template images, and the image segments around the mapped points were selected as the search windows in the CCM. Since the vertical height of the cloud above its base was assumed to be small compared with the CBH, the appearance of the clouds is assumed to be similar in both images. In (5)–(9), h is the cloud base height; FOV is the field of view of the camera; (u, v) are the image coordinates; and r_max is the maximum image radius.
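Since equations (5)–(9) are not reproduced above, the following is a hedged sketch of the mapping they describe, under the paper's equidistant-lens assumption (r = r_max·θ/(FOV/2)) and north-aligned, zero-height cameras; the function name and baseline parameters (dE, dN) are illustrative:

```matlab
function [u2, v2] = mapPoint(u1, v1, h, rmax, fovDeg, dE, dN)
% Map an image point of camera 1 onto the image of camera 2 for a trial
% cloud base height h (m). Assumes the equidistant model
% r = rmax*theta/(FOV/2) and image coordinates centred at (0,0) with
% north up; (dE, dN) is camera 2's position relative to camera 1 in metres.
halfFov = deg2rad(fovDeg)/2;
r1    = hypot(u1, v1);                 % pixel radius in image 1
theta = (r1/rmax)*halfFov;             % zenith angle of the cloud point
phi   = atan2(u1, v1);                 % azimuth, clockwise from north
% ground-plane position of the point below the cloud, relative to camera 1
E = h*tan(theta)*sin(phi);             % east (m)
N = h*tan(theta)*cos(phi);             % north (m)
% the same cloud point as seen from camera 2
E2 = E - dE;  N2 = N - dN;
theta2 = atan2(hypot(E2, N2), h);
phi2   = atan2(E2, N2);
r2 = rmax*theta2/halfFov;              % back to a pixel radius in image 2
u2 = r2*sin(phi2);  v2 = r2*cos(phi2);
end
```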

Some boundary points have similar maximum cross-correlation values for adjacent CBH values. Due to the fisheye lens distortion, the points corresponding to higher CBHs are mapped closer to each other; therefore, the corresponding point lies inside several search windows (since the search windows for high CBHs overlap) and gives similar maximum cross-correlation values. The distances between the center points of the search windows and the corresponding maximum cross-correlation points were compared to obtain an accurate CBH: the CBH whose correlation point lay nearest the center of its search window was selected. This method was applied to all other cloud boundary points, and the average value was taken as the CBH.

2.2.7. Area of the Image That Generates a Shadow on the PV Plant

In the onsite irradiance forecasting method, the camera is placed at the PV site. Therefore, if a cloud comes between the sun and the PV plant, it is identified from the sky image (when the location of the sun in the image is covered by the cloud). The location of the sun in the image was found using the camera orientation, longitude, latitude, camera mapping function, date, and time [41].

In multiple-site irradiance forecasting, since the PV plants are located a few kilometers away from the camera, the sky image locations that create shadows on the PV plants were found from the calculated CBH, camera orientation, longitude, latitude, camera mapping function, date, and time.

2.2.8. Time for Occlusion

The irradiance drop occurrence time is defined by the speed of the clouds directed towards the PV plant. For onsite irradiance forecasting, it is the time taken by the cloud to reach the position of the sun, whereas in multiple-site irradiance forecasting, it is the time taken by the cloud to reach the point that casts a shadow on the PV plant. The irradiance drop occurrence time was calculated using the following steps (a sketch of steps (iv)–(vi) is given after the list):
(i) The point (in the last image of the image set) that creates a shadow on the PV plant was found (point "P" shown in Figure 5)
(ii) A number of points inside each white blob of the last image frame were selected in such a way that the distances between points in the x and y directions are the same
(iii) After that, the backward mapping function of the camera was applied to the points
(iv) Then, the points with a possibility of casting a shadow on the PV plant within the expected forecasting time period were found via the motion vector of the cloud and point "P". This is done by identifying the points inside the area bounded by two lines drawn at +15° and −15° relative to the motion vector, starting from point "P", as indicated in Figure 5
(v) According to the cloud velocity calculated under Section 2.2.5, the time taken by each selected point to pass location "P" was calculated
(vi) Then, from the times calculated for each point to pass location "P", the minimum (T_min) and mean (T_mean) occlusion times were obtained
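A sketch of steps (iv)–(vi) above (pts, P, and v are illustrative names for the back-mapped point positions, the shadow point, and the cloud velocity vector):

```matlab
% Keep the cloud points lying inside the +/-15 degree cone around the
% motion vector drawn through point P, then time each point's arrival.
% pts is K x 2 (ground positions, m), P is 1 x 2, v is 1 x 2 (m/s).
rel  = pts - P;                               % vectors from P to each point
ang  = atan2d(rel(:,2), rel(:,1));            % bearing of each point from P
from = atan2d(-v(2), -v(1));                  % direction clouds arrive from
dAng = mod(ang - from + 180, 360) - 180;      % wrapped angular difference
sel  = abs(dAng) <= 15;                       % points inside the cone
t    = vecnorm(rel(sel,:), 2, 2) / norm(v);   % time for each point to reach P
Tmin = min(t);  Tmean = mean(t);              % occlusion times
```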

If location "P" is not covered by a cloud at the beginning of the forecast, the minimum occlusion time is the time at which cloud occlusion starts. If it is already covered by a cloud at the beginning of the forecast, the minimum occlusion time is the time taken by the selected point nearest to point "P" to reach it.

2.2.9. Irradiance Forecasting

According to the minimum and mean occlusion times (T_min and T_mean) obtained under Section 2.2.8, the irradiance drop was predicted for 1 minute, 5 minutes, and 15 minutes ahead of real time, as in the flow diagram shown in Figure 6. For the sunny sky condition, since there is no occlusion time, the forecast irradiance equals the clear day irradiance (obtained from the ASHRAE model [35]). If the minimum time for occlusion is less than or equal to the forecasting time horizon, there is an irradiance drop, calculated by multiplying the clear day irradiance profile by the irradiance drop percentage factor obtained for the cloud type (thick grey clouds, white clouds, or bright white clouds). If the minimum time is greater than the considered forecasting time horizon, there will be no irradiance drop, and the irradiance forecast equals the clear day irradiance value. For example, if the forecasting time horizon is 5 min, an irradiance drop is forecast when the minimum time for occlusion is less than or equal to 5 min, and not otherwise.
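The decision rule of Figure 6 therefore reduces to a simple comparison; as a sketch (variable names illustrative):

```matlab
% Issue a drop only if the minimum occlusion time falls inside the
% forecasting horizon. Gclear is the ASHRAE clear-sky irradiance and
% dropFactor the remaining fraction for the cloud category
% (0.30/0.40/0.50 in Section 3.2.4).
if Tmin <= horizon
    Gforecast = dropFactor * Gclear;   % cloud reaches the shadow point in time
else
    Gforecast = Gclear;                % no occlusion within the horizon
end
```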

2.3. Error Metrics and Forecast Performance

Root mean square error (RMSE), mean absolute error (MAE), and true drop identified percentage were calculated to evaluate the results [29, 42].

The RMSE and MAE were calculated using the predicted irradiance (G_f) and the measured irradiance (G_m) as given by

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (G_{f,i} − G_{m,i})² ),  MAE = (1/N) Σ_{i=1}^{N} |G_{f,i} − G_{m,i}|, (10)

where N is the number of forecast points.

The percentage of accurately identified irradiance states (drop or no drop/1 or 0) from the forecasting model was calculated using (11):

TS (%) = (true states / (true states + false states)) × 100, (11)

where "true states" is the total number of correctly identified irradiance states and "false states" is the total number of incorrectly identified irradiance states (incorrect drops and missed drops) [43].

The skill factor indicates the performance of short-term forecasting models with respect to the persistence model. Forecast accuracy depends on weather conditions and on the temporal and spatial resolution of the forecasts. Therefore, forecast accuracies are not comparable site-by-site or hour-by-hour unless normalized by a benchmark; the forecast skill is a way to normalize forecast accuracy [42]. Therefore, the skill factor was calculated for the forecasting results.

The persistence method was defined such that the forecast irradiance at time t + Δt equals the measured irradiance at time t (where Δt is the forecasting time horizon). The skill factor was calculated using (12):

SF = 1 − (RMSE_f / RMSE_p), (12)

where RMSE_p is the root mean square error of the persistence method and RMSE_f is the root mean square error of the proposed method. SF was calculated for each forecasting time horizon.
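A sketch computing (10)–(12); the drop-state criterion (irradiance below 90% of the clear-sky value) is an assumption for illustration only:

```matlab
% Gf, Gm, Gp are N x 1 vectors of forecast, measured, and persistence
% irradiance (W/m^2); Gclear is the clear-sky profile.
RMSE  = sqrt(mean((Gf - Gm).^2));        % eq. (10)
MAE   = mean(abs(Gf - Gm));              % eq. (10)
RMSEp = sqrt(mean((Gp - Gm).^2));        % persistence benchmark
SF    = 1 - RMSE/RMSEp;                  % skill factor, eq. (12)
stateF = Gf < 0.9*Gclear;                % forecast drop states (assumed criterion)
stateM = Gm < 0.9*Gclear;                % measured drop states
TS = 100*mean(stateF == stateM);         % true-state percentage, eq. (11)
```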

3. Case Study

3.1. Experimental Setup

Two inexpensive sky camera systems were built to evaluate the forecasting methodology. Each sky camera system comprised a Raspberry Pi single-board computer (3rd generation, Model B) (US$35), a 32 GB micro-SD card (US$15), and a programmable high-resolution Pi camera (US$20) with a fisheye lens (US$20).

The installed sky camera systems are shown in Figure 7. The encapsulated Raspberry Pi (RPI) board, together with a programmable high-resolution Raspberry Pi camera module with a wide-angle lens, enables a vast area of the sky to be captured in each image. The Raspberry Pi camera and the RPI board were placed inside a weatherproof enclosure. Both sky camera systems were operated remotely and were programmed to capture images automatically from 8.00 am to 4.45 pm at 10-second intervals. The captured images were stored at a fixed resolution in jpeg format.

One camera was installed close to a rooftop PV plant (at location 1), and the second camera was placed 1.9 km away from the first (location 2), as shown in Figure 7(a), to collect data. Location 1 is at the Sustainable Buildings Research Centre (SBRC) at the Innovation Campus, University of Wollongong, Australia, while the location 2 camera is on the roof of Building 35 at the Main Campus of the University of Wollongong, Australia.

The irradiance levels were forecasted for location 1 and location 3 using the images captured by camera 1 at location 1. The camera at location 2 was used only to obtain images for the CBH calculation. PV data were collected from location 1 to test the onsite forecasting model. To test the multiple-site forecasting model, location 3 (the Building 28 rooftop PV system at the main campus, University of Wollongong), located 2 km from camera 1 (location 1), was selected as shown in Figure 7(a) (location 1 (34.40°S, 150.90°E) and location 3 (34.41°S, 150.88°E)). Power measurements were acquired from both locations 1 and 3 at 1-minute resolution to evaluate the forecasting accuracy.

3.2. Irradiance Forecasting Methodology Implementation

This section describes how the irradiance forecasting results were obtained via the experimental setup described in Section 3.1 and the methodology described in Section 2.2.

3.2.1. Blue-Sky Area Separation

As the initial step of irradiance forecasting, the blue-sky area separation process was carried out. The separation of blue (noncloudy) pixels from the sky image was performed using the threshold method. The training and testing processes were carried out using 50 sky images selected across the three sky conditions (sunny, overcast, and partially cloudy). From this image set, six hundred pixel indexes related to the blue area, white clouds, and grey clouds were selected. Then, the corresponding Y, Cb, and Cr components of the selected pixel indexes were extracted, and a scatter plot was generated. Figure 8(a) shows the scatter plot generated from the Y, Cb, and Cr components of the selected pixel indexes.

As in Figure 8(c), two main clusters were identified, and it can be seen that a threshold on the Cb component separates the pixels related to clouds from those related to the blue area of the sky image. Therefore, the Cb component was selected as the dominant component used to separate bluish pixels from the image. As shown in Figure 8(b), the Y component cannot be used for separation; therefore, it was omitted. The normalized histogram of the Cb component was generated to determine the threshold value for cloud separation, as shown in Figure 8(d) (as in Figure 8(c), since the Cb and Cr components of grey and white cloud pixels lie in the same range, both grey and white cloud pixels were treated together as cloud pixels). The classification decision boundary was adjusted to minimize the classification error, and the optimal location according to the illustrated histograms was found to be 0.58. Hence, the thresholding was based on an empirical probabilistic model generated via a manual labeling process. Then, using Figure 8(c) and the selected Cb threshold (0.58), a threshold value of 0.46 was obtained for the Cr component such that all pixels related to the blue area of the sky image are filtered out. For example, if the Cb value of a pixel is greater than 0.58 and its Cr value is less than 0.46, it was taken as a pixel belonging to the blue area of the sky image (a noncloud pixel). Then, using the threshold values obtained for the Cb and Cr components, binary cloud images were generated.

A dataset of 100 pixels (25 white cloud pixels, 25 grey cloud pixels, and 50 pixels related to the blue-sky area) was tested to validate the method. The accuracy of identification of the correct pixel category was around 96%.

As discussed in Section 2.2.1, three image filters, the median filter, the Wiener filter, and the order-statistic filter, were evaluated for image noise removal and boundary smoothing. After applying the three filters to the binary image, the correlation between each filtered image and the binary image was found. The correlation coefficients obtained for the median-filtered, Wiener-filtered, and order-statistic-filtered images were 0.996, 0.882, and 0.993, respectively. Therefore, the median filter was selected for cloud boundary smoothing, since it smooths the cloud boundary while maintaining the shape of the clouds better than the other two filters.

3.2.2. Individual Cloud Identification

The next step of the method is to identify the individual cloud areas on the sky image. Figure 9(a) shows the raw image with the calculated position of the sun, Figure 9(b) shows the identified white patches including both the sun and clouds in the sky image, and Figure 9(c) shows the separately identified cloud regions (area without the sun).

3.2.3. Sky Categorization

Then, the sky image was categorized into three sky categories as defined in Table 2 according to the white pixel percentage (WPR) of the binary image.

Figure 10 shows the white pixel percentage of the first image in each image set from 8.30 am to 4.45 pm (1 min samples).

3.2.4. Cloud Pixel Categorization

Figures 11(a) and 11(b) show the image and the identified cloud areas. By considering the different grey levels of the cloud pixels in the image, the cloud pixels were categorized into three categories, as shown in Figure 11(c).

Three irradiance drop levels (C1, C2, and C3) were established for the three cloud groups. For thick clouds (C3), the irradiance was assumed to be 30% of the clear day value; for white clouds (C2), 40%; and for bright white clouds (C1), 50%.
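A minimal sketch of the per-cloud factor assignment (assuming the pixels of one cloud blob have already been labeled 1 = bright white, 2 = white, 3 = thick; names illustrative):

```matlab
% Assign one drop factor per cloud from the modal pixel category
% (Section 2.2.4), using the factors of this section.
factors  = [0.50, 0.40, 0.30];     % remaining fraction of clear-sky GHI (C1..C3)
cloudCat = mode(pixCat(:));        % dominant pixel category in the cloud
dropFac  = factors(cloudCat);      % factor used in the irradiance forecast
```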

3.2.5. Cloud Pixel Tracking and Velocity Extraction

After the categorization of the sky condition, the cloud tracking process was performed on each cloud to obtain the cloud velocity. As the first step, points were selected inside the identified blobs such that the distance between selected points' coordinates is 50 pixels (as in Figure 12(a)). Then, to apply the CCM to each point, the sizes of the template image and the search window were chosen as follows. The template image size (n) was set to 40 pixels, below the 50-pixel point spacing, so that only one point coordinate lies within each template. The search window size was determined as follows: according to Reference [44], the average wind speed in the Wollongong area is 19 km/h; the minimum cloud base height considered in our model was 500 m; and the time difference between two captured image frames is Δt = 10 s. Therefore, the maximum movement of a cloud between two image frames is around 31 pixels, and the search window size was set to 70 pixels (m = 70). Figure 12(b) shows the tracked cloud points over six iterations, and Figure 12(c) shows the average motion vectors of the selected points.
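The 31-pixel figure can be checked with the quantities given elsewhere in the paper (366-pixel image radius, 140° FOV), assuming the cloud passes near the zenith: 19 km/h ≈ 5.3 m/s, i.e., about 53 m in Δt = 10 s; at a CBH of 500 m this subtends roughly 53/500 rad ≈ 6.1°, and with an equidistant lens resolving 366/70 ≈ 5.2 pixels per degree, the displacement is about 6.1° × 5.2 px/° ≈ 32 pixels, consistent with the stated maximum. A 70-pixel search window around each point therefore bounds the motion.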

Figures 13(a) and 13(b) show the histograms of the velocity magnitudes and angles of the points selected in the cloud. The magnitude corresponding to the highest point count (bin count) in the velocity magnitude histogram and the angle corresponding to the highest point count in the angle histogram were assigned as the cloud moving velocity vector.

Figure 14(a) shows the raw image corresponding to the overcast sky condition, and Figure 14(b) shows the identified location of the sun (marked in red). Separately identified blobs, selected cloud points inside the cloud blob, and tracked cloud points are shown in Figures 14(c)–14(e). Motion vectors of the selected cloud points and the histograms of velocity angles and velocity magnitudes are shown in Figures 14(f)–14(h), respectively.

3.2.6. Cloud Base Height (CBH) Calculation

Then, the cloud base height calculation process was implemented using the images captured from camera 1 and camera 2. According to the FOV of the cameras (140°, i.e., a half-angle of 70°) and the distance between them (1.92 km), the minimum CBH (H_min) was calculated as 350 m.
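This value is consistent with the overlap geometry of Figure 3, under the assumption that the lowest commonly visible cloud point lies directly above the midpoint between the cameras and must fall inside both viewing cones: H_min = d / (2 tan(FOV/2)) = 1920 m / (2 tan 70°) ≈ 350 m.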

To obtain CBHs, the cloud boundary points on images from camera 1 were first found using the Matlab® "bwboundaries" function, as shown in Figure 15(a). Then, cloud boundary points closer to the center of the image were selected, such that the distance from the cloud boundary to the center of the image is less than 250 pixels (the radius of the full image is 366 pixels). Of those points, 10 were randomly selected, as shown in Figures 15(b) and 15(c), for the calculation of CBH. Figure 15(d) shows a single point selected for illustration. By varying the CBH from 400 m to 10 km in 200 m steps, the coordinates of the selected boundary point were mapped onto sky image 2, as in Figure 15(e). As seen in Figure 15(f), the selected point yields similar maximum cross-correlation outputs for several CBH values. Therefore, the distances between the center points of the search windows and the corresponding maximum cross-correlation points were plotted on the same axes (Figure 15(f), brown), and the CBH corresponding to the minimum distance was assigned as the CBH.

3.2.7. Area of the Sky Image Obtained at Location 1 That Creates a Shadow on the PV Plant, Time for Occlusion, and Irradiance Forecast

The selected points inside the cloud are shown in Figure 16(a), and the image location ("P") that creates a shadow on the PV plant at location 1 is shown in Figure 16(b). Then, the cloud area required for the irradiance forecasting was selected according to the direction of the cloud motion vector and the image location "P", as shown in Figure 16(c). Since the images were obtained through a wide-angle lens, the backward mapping function of the camera was applied to the points to obtain the corresponding image area, as shown in Figure 16(d).

The time for occlusion was obtained from the cloud moving velocity. Then, the irradiance profile was forecasted by multiplying the clear day irradiance value by the irradiance dropping factor of the cloud category, as shown in Figure 17. Figures 17(a)–17(c) show the irradiance profiles obtained from the onsite (location 1) forecasting model for the three forecasting time horizons (1 min, 5 min, and 15 min), and Figures 17(d)–17(f) show the irradiance profiles obtained from the multiple-site forecasting model (for the PV system at location 3) for the three forecasting time horizons. The irradiance is forecasted from 8.15 am to 3.40 pm with 1 min granularity.

3.3. Performances of the Forecasting Model

Sunny, overcast, and partially cloudy days randomly selected between 09/04/2019 and 12/09/2019 were used to illustrate the performance of the 1 minute, 5 minutes, and 15 minutes ahead irradiance forecasts of the two forecasting models: the onsite model and the multiple-site model.

The images captured between 8.30 am and 4.30 pm (local time) (with timestamps) were used for the irradiance forecasts. For a single day, the experimental dataset contains images from the two (time-synchronized) sky cameras and PV power measurements obtained from the two locations. The irradiance is forecasted from 8.30 am to 4.30 pm with 1 min granularity. Therefore, for each day and forecasting horizon, 480 forecasts were generated (one per minute over the 8-hour window).

Only the data obtained from camera 1 was used for the onsite forecasting model, and forecasts were generated for the PV system at location 1. For the multiple-site forecasting model, data obtained from both cameras were used, and camera 2 was used to calculate the CBH only. From this model, the irradiance forecasts were obtained for location 3, which is 2 km away from camera 1.

To compare the forecasted irradiance with the measured irradiance, the measured PV power values obtained from the rooftop solar PV plants were transformed into irradiance using the capacities of the two rooftop PV plants.
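A hedged sketch of this transformation, assuming a simple linear scaling to standard test conditions, is G ≈ (P / P_rated) × 1000 W m⁻², where P is the measured PV power and P_rated the plant capacity; temperature and inverter effects are neglected in this sketch.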

3.3.1. Onsite Forecasting

RMSE, MAE, skill factor, and the percentage of true irradiance state identification of onsite irradiance forecasting results are given in Table 3.

Further, to compare the effectiveness of applying three different irradiance dropping levels (determined from cloud color properties) against a constant irradiance dropping factor, irradiance forecasts were also generated under a 40% drop assumption (as used in [10, 14]), and the corresponding RMSE, MAE, and SF were calculated. As shown in Table 3, the errors under the single irradiance drop assumption are higher than those of the forecasts obtained from the three-level cloud categorization for partially cloudy and overcast days. Figure 18 compares the average MAE obtained from the three-level cloud categorization with that of the single drop level assumption for a sunny day, a partially cloudy day, and an overcast day, and shows that it is lower for the three-level cloud categorization.

3.3.2. Multiple-Site Irradiance Forecasting

Table 4 shows RMSE, MAE, SF, and TS of the forecasting results obtained for a location (location 3) 2 km away from camera 1. It compares the individual cloud movement-based tracking model introduced in this paper with the tracking method used in [10], where the entire cloud area is assumed to be moving at a uniform velocity throughout the image (without considering the individual cloud movement).

Figure 19 compares the average percentage of correctly identified true irradiance states (TS) for sunny, partially cloudy, and overcast days from the forecasting model introduced in this paper and from the method in [10] for the three forecasting time horizons. As shown in Table 4 and Figure 19, in terms of the percentage of correctly identified irradiance states (TS), the individual cloud movement tracking-based forecasting model introduced in this paper performed better than the method in [10] (which assumes that all clouds move at the same velocity).

Figure 20 shows the average MAEs obtained from the three-level cloud categorization introduced in this paper, from the single drop level assumption (40% drop), and from the forecasting model introduced in [10], for the three forecasting time horizons. The highest MAE was obtained from the forecasting model introduced in [10].

When considering the skill factor, the SF remains positive for the 15 min forecasting time horizon for location 3 on sunny and partially cloudy days, which implies that the model introduced in this paper outperforms the persistence model.

Reference [33] introduces a solar irradiance forecasting model based on surface irradiance mapping. In that model, the mapping relationship between cloud pixel information and irradiance is first established, and a sky image-irradiance mapping model is then developed. When establishing the mapping model, the pixel values of the circular sky region that will cover the location of the sun in the image over the next 10 min were extracted as the model input, while the corresponding irradiance was selected as the output. With the sky image-irradiance mapping models trained using BPNN and SVM, average RMSE values of 117 W m⁻² and 116 W m⁻² were achieved. In comparison, the method introduced in this paper achieved an average RMSE of 108 W m⁻², as shown in Table 5. Therefore, the forecasting model introduced in this paper performs better than that of [33].

4. Conclusion

According to the current trend, the share of solar PV in electricity generation will increase drastically in the coming years. Hence, the impacts on power systems of variations in solar PV power generation need to be addressed. As a solution, this paper presents a novel short-term irradiance forecasting algorithm. The forecasting model makes use of an inexpensive ground-based sky imaging system. The inexpensive camera with a wide-angle lens captures a broad area of the sky, which extends the forecasting time horizon. As the shape of the clouds changes with time, a cloud segment tracking method using a cross-correlation algorithm was introduced instead of an optical flow algorithm based on cloud feature point tracking.

Furthermore, clouds in different layers of the sky move at different velocities. The proposed individual cloud tracking method therefore makes it possible to find the moving velocity of each cloud in each layer. Further, the results obtained from the proposed method were compared with those of a method that assumes all clouds move at an average velocity, and the comparison showed that the method introduced in this paper performs better.

Irradiance forecasts were obtained for 1 minute, 5 minutes, and 15 minutes ahead using the proposed method. The method can forecast onsite irradiance changes 1 minute in advance with 80% accuracy and 5 minutes in advance with 78% accuracy. The 15-minute forecasts achieved a positive skill factor, indicating that the method introduced in this paper outperforms the persistence model.

Since PV power generation can be directly obtained from irradiance forecasts, short-term irradiance forecasts help to overcome the problems caused by the intermittency of solar PV generation. For example, forecasting pending energy shortfalls is useful for the management of PV inverters and energy storage systems. Further, PV forecasts can be used to predict the network status to control and manage smart inverters and smart transformers. Furthermore, accurate solar irradiance forecasts derived from the proposed method help to schedule solar PV generation in large interconnected networks. The proposed image-based method was shown to perform best under the highly volatile conditions of partially cloudy days, which is promising; a hybrid method based on a day type predicted beforehand may be even more useful. Therefore, the forecasting method proposed in this paper should be further developed with these applications in mind, integrating it with network models, market models, and intelligent control models.

Abbreviations

NEM: Australian National Electricity Market
CCM: Cross-correlation method
CBH: Cloud base height
R: Red
B: Blue
G: Green
RBR: Red-to-blue ratio
SP: Sunshine parameter
GHI: Global horizontal irradiance
RBD: Difference between each pixel's red and blue values
FOV: Camera field of view
TSIs: Total sky imagers
USI: Sky imager system designed and developed by the University of California, San Diego
ANN: Artificial neural network
VOF: Variational optical flow
UCSD sky imager: Camera developed by the University of California, San Diego
SBRC: Sustainable Buildings Research Centre
RPI: Raspberry Pi
Y: Image pixel's luma component
Cb: Blue-difference chroma component
Cr: Red-difference chroma component
Δt: The average time between two image frames
WPR: White pixel ratio
H_min: Minimum cloud base height
RMSE: Root mean square error
MAE: Mean absolute error
SF: Skill factor.

Data Availability

The sky images and irradiance data used to support the findings of this study have been deposited in the Mendeley Data repository (doi:10.17632/cb8t8np9z3.2).

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

The authors would like to acknowledge the collaboration between the University of Wollongong and the University of Peradeniya as a part of the ongoing Joint Ph.D. program.