Recurrent Cascade-Correlation (RCC)

Recurrent Cascade-Correlation (RCC) is an extension of the Cascade-Correlation (CC) learning architecture for training artificial neural networks. It was introduced by Scott E. Fahlman in 1991, building on the CC algorithm he had developed with Christian Lebiere in 1990. While the standard CC algorithm builds feedforward networks, RCC builds recurrent networks, whose feedback connections allow information to loop back within the network. This makes RCC suitable for processing time-series data and for problems involving sequences or patterns that unfold over time.

Purpose and Role: The purpose of RCC is to train neural networks to handle problems with temporal dependencies, such as speech recognition, natural language processing, and time-series prediction. The role of RCC is to create a network architecture that can learn complex patterns and adapt to dynamic input-output relationships by incorporating feedback connections.


Key Concepts:

  1. Recurrent connections: Unlike feedforward networks, recurrent networks have feedback connections, which allow information to flow in loops within the network.
  2. Cascade-Correlation learning: RCC builds upon the CC algorithm, which adds hidden units incrementally to the network during training, minimizing the error between the network's output and the desired output.
  3. Hidden unit activation: In RCC, each hidden unit's activation combines the weighted sum of its inputs with a single self-recurrent connection: the unit's own activation from the previous time step, scaled by a trainable weight.
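The hidden-unit update just described can be sketched for a single unit as follows. This is a minimal illustration, assuming a logistic activation and one self-recurrent weight per unit; the function names are hypothetical, not from the original formulation:

```python
import math

def rcc_unit_activation(inputs, weights, w_self, prev_activation):
    """Activation of one RCC hidden unit at a single time step.

    The net input is the weighted sum of feedforward inputs plus one
    self-recurrent term: the unit's own activation from the previous
    step, scaled by w_self. (A sketch of the update, not a trainer.)
    """
    net = sum(w * x for w, x in zip(weights, inputs)) + w_self * prev_activation
    return 1.0 / (1.0 + math.exp(-net))  # logistic squashing

def run_sequence(sequence, weights, w_self):
    """Unroll the unit over an input sequence, carrying its state forward."""
    v = 0.0  # recurrent state starts at zero
    outputs = []
    for x in sequence:
        v = rcc_unit_activation(x, weights, w_self, v)
        outputs.append(v)
    return outputs
```

Because the self-loop carries the previous activation forward, the unit's response to a zero input still reflects what it saw earlier in the sequence.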

Importance: RCC is important because it extends the capabilities of neural networks to handle time-dependent problems, which are prevalent in various domains, such as finance, weather forecasting, and natural language processing. By incorporating feedback connections, RCC allows the network to retain information about previous inputs, enabling it to learn complex temporal patterns and sequences.

Benefits, Pros, and Cons:


Benefits:

  1. Improved learning capabilities: RCC can handle problems with temporal dependencies, which standard feedforward networks cannot capture without a fixed input window.
  2. Dynamic adaptation: RCC can adapt to changing input-output relationships, making it suitable for real-world applications where data patterns may change over time.
  3. Incremental learning: Like the CC algorithm, RCC adds hidden units incrementally, leading to faster training and improved generalization.
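The incremental step mentioned above trains candidate hidden units to maximize Cascade-Correlation's score S: the sum, over output units, of the absolute covariance between a candidate's value and the network's residual error across training patterns. A sketch of just that scoring step (names are illustrative; the gradient ascent that actually adjusts the candidate's weights is omitted):

```python
def candidate_score(candidate_values, residual_errors):
    """Cascade-Correlation candidate score S.

    candidate_values: the candidate unit's output for each training pattern.
    residual_errors:  per-pattern lists of residual error, one per output unit.
    Returns the sum over output units of |covariance(value, error)|.
    """
    n = len(candidate_values)
    v_mean = sum(candidate_values) / n
    n_outputs = len(residual_errors[0])
    score = 0.0
    for o in range(n_outputs):
        e_mean = sum(e[o] for e in residual_errors) / n
        cov = sum((v - v_mean) * (e[o] - e_mean)
                  for v, e in zip(candidate_values, residual_errors))
        score += abs(cov)
    return score
```

A candidate whose output tracks the remaining error scores highly and is installed as the next hidden unit; one that is uncorrelated with the error scores near zero.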


Pros:

  1. Suitable for time-series data and sequence problems.
  2. Can learn complex patterns and adapt to dynamic input-output relationships.
  3. Typically faster training and better generalization than training a fixed recurrent network with, for example, backpropagation through time.


Cons:

  1. Can be computationally expensive due to the recurrent connections and incremental learning process.
  2. The architecture may become complex as hidden units are added, which can lead to overfitting.
  3. Training can be sensitive to initial weight values and learning rate parameters.

Examples to illustrate key concepts:

  1. A finance company might use RCC to predict future stock prices based on historical data. The recurrent connections in the network would allow it to capture the temporal dependencies between price movements over time.
  2. A natural language processing application might use RCC to generate text based on the input of previous words in a sentence. The recurrent connections would enable the network to learn the context and generate more accurate and coherent text.

See Also